| doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.15895 | 36 | Effect of the Model Size for Classifiers. We experiment with other model choices in addition to the BERT-base-uncased [11] used throughout the paper: TinyBERT [19], DistilBERT [45], DeBERTa-V3-base, and DeBERTa-V3-Large [18], with parameter sizes ranging from 14M to 435M.8 We visualize the results in Figure 9. Overall, AttrPrompt outperforms SimPrompt by a large margin yet underperforms the Gold across different model choices. With a lightweight backbone of 66M parameters, AttrPrompt can often outperform SimPrompt trained with the 435M-parameter model. This indicates that diversely attributed prompts can help close the performance gap between the Gold and simple class-conditional prompts, and that the improvement is robust to the model parameter size.
8 Additionally, we evaluate under the linear probing setting [22], which trains a linear classifier over the frozen pretrained features. The results are deferred to Appendix C.1.
Figure 9: The performance curves with classifier models of different parameter sizes on (a) Amazon, (b) NYT, (c) Reddit, and (d) StackExchange.
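Below is a minimal sketch of the linear probing baseline from footnote 8: frozen [CLS] features are extracted from a pretrained encoder and a linear classifier is fit on top. The encoder name, feature choice, and the scikit-learn classifier are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch of linear probing: train a linear classifier over frozen encoder features.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

def encode(texts, model_name="bert-base-uncased", batch_size=32, device="cpu"):
    tok = AutoTokenizer.from_pretrained(model_name)
    enc = AutoModel.from_pretrained(model_name).to(device).eval()
    feats = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tok(texts[i:i + batch_size], padding=True, truncation=True,
                        max_length=128, return_tensors="pt").to(device)
            # Frozen [CLS] representation used as the sentence feature.
            feats.append(enc(**batch).last_hidden_state[:, 0].cpu())
    return torch.cat(feats).numpy()

# train_texts/train_labels would come from the generated (or gold) training set.
# clf = LogisticRegression(max_iter=1000).fit(encode(train_texts), train_labels)
# test_acc = clf.score(encode(test_texts), test_labels)
```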
# 6.5 Plugging AttrPrompt in existing approaches | 2306.15895#36 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
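To make the contrast between the two prompting schemes concrete, the sketch below builds a simple class-conditional prompt (SimPrompt-style) and an attributed prompt (AttrPrompt-style) and queries a chat model for one training example. The attribute dimensions, their values, and the model name are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch contrasting class-conditional vs. attributed prompts
# for training data generation; attribute values and model name are assumptions.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def sim_prompt(label: str) -> str:
    return f"Write a news article about {label}."

def attr_prompt(label: str, attributes: dict) -> str:
    # Randomly combine one value per attribute dimension to diversify outputs.
    picked = {dim: random.choice(vals) for dim, vals in attributes.items()}
    return (f"Write a news article about {label}. "
            f"Subtopic: {picked['subtopic']}; style: {picked['style']}; "
            f"length: about {picked['length']} words.")

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return resp.choices[0].message.content

attributes = {"subtopic": ["elections", "trade policy"],
              "style": ["analytical", "narrative"],
              "length": [80, 150]}
example = generate(attr_prompt("politics", attributes))
```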
2306.15895 | 37 | In this section, we demonstrate that AttrPrompt can be painlessly integrated with prior zero-shot training data generation techniques. Table 9 shows the results for several recently proposed methods, which design additional techniques based on noise-robust losses to further reduce the effect of noisy labels [13, 30, 56] and leverage in-context examples for data generation [56]. Although these approaches achieve notable performance gains on simple binary classification tasks, their gains become marginal for fine-grained classification: the performance gain is less than 2% for all methods on both datasets. In contrast, using AttrPrompt leads to consistent performance boosts (more than 5% in all cases) for these approaches, indicating that, compared with label noise, data diversity is a more crucial bottleneck for existing dataset generation methods.
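For context on the noise-robust training mentioned above, the sketch below shows the generalized cross-entropy (GCE) loss, one widely used noise-robust objective. It is presented purely as an illustration of the kind of loss such methods build on, not as the exact objective of any of the cited approaches.

```python
# Generalized cross-entropy (GCE): (1 - p_y^q) / q interpolates between
# standard cross-entropy (q -> 0) and MAE (q = 1), which is more robust
# to label noise. Illustrative sketch only.
import torch
import torch.nn.functional as F

def gce_loss(logits: torch.Tensor, targets: torch.Tensor, q: float = 0.7) -> torch.Tensor:
    probs = F.softmax(logits, dim=-1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
    return ((1.0 - p_y.pow(q)) / q).mean()

# Example: logits from a classifier trained on possibly noisy generated labels.
logits = torch.randn(4, 10)          # batch of 4 examples, 10 classes
labels = torch.tensor([0, 3, 9, 1])  # possibly noisy labels
loss = gce_loss(logits, labels)
loss_ce = F.cross_entropy(logits, labels)  # standard CE for comparison
```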
2306.16564 | 37 | [5] A. P. Davis, C. J. Grondin, R. J. Johnson, D. Sciaky, J. Wiegers, T. C. Wiegers, and C. J. Mattingly. Comparative toxicogenomics database (ctd): update 2021. Nucleic acids research, 49(D1):D1138–D1143, 2021.
[6] J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
[7] D. Fu, M. Chen, F. Sala, S. Hooper, K. Fatahalian, and C. Ré. Fast and three-rious: Speeding up weak supervision with triplet methods. In International Conference on Machine Learning, pages 3280–3291. PMLR, 2020.
[8] Y. Gu, R. Tinn, H. Cheng, M. Lucas, N. Usuyama, X. Liu, T. Naumann, J. Gao, and H. Poon. Domain-specific language model pretraining for biomedical natural language processing, 2020. | 2306.16564#37 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there is no effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitates error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 38 | More interestingly, AttrPrompt even benefits dataset generation approaches that do not use LLMs. To demonstrate this, we use the LLM-generated content (subtopics for NYT and product names for Amazon) to enrich the label names used in ReGen [59], a retrieval-based approach for training data generation. With the expanded label names, AttrPrompt largely improves (a 14%–26% absolute gain) the performance of ReGen on fine-grained classification tasks. These results demonstrate the advantage of AttrPrompt as a generic plug-in module for existing training data generation approaches.
Table 9: Performance comparison when AttrPrompt serves as a plug-in for existing approaches.
| Method | NYT Acc. | NYT F1 | Amazon Acc. | Amazon F1 |
|---|---|---|---|---|
| SuperGen | 76.11 | 76.80 | 58.17 | 56.06 |
| SuperGen w/ AttrPrompt | 82.05 | 82.62 | 66.76 | 66.33 |
| SunGen | 75.82 | 76.52 | 54.30 | 53.50 |
| SunGen w/ AttrPrompt | 81.65 | 82.70 | 63.89 | 63.76 |
| ProGen | 77.05 | 76.70 | 58.40 | 56.95 |
| ProGen w/ AttrPrompt | 80.93 | 81.32 | 66.43 | 66.02 |
| ReGen | 70.01 | 68.14 | 34.70 | 30.93 |
| ReGen w/ AttrPrompt | 82.18 | 82.50 | 58.40 | 56.00 |
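The retrieval step described above can be sketched as follows: expanded label names (e.g., LLM-generated subtopics) serve as queries, and the closest unlabeled documents are pseudo-labeled with the corresponding class. The encoder, corpus, and top-k cutoff here are illustrative assumptions in the spirit of ReGen, not its exact implementation.

```python
# Hypothetical retrieval-based data construction with expanded label names.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_training_data(expanded_label_names, corpus, top_k=100):
    """expanded_label_names: {class_name: [subtopic strings]}; corpus: list[str]."""
    doc_emb = encoder.encode(corpus, normalize_embeddings=True)
    dataset = []
    for label, names in expanded_label_names.items():
        query_emb = encoder.encode(names, normalize_embeddings=True)
        # Cosine similarity of every document to its best-matching query.
        scores = (doc_emb @ query_emb.T).max(axis=1)
        for idx in np.argsort(-scores)[:top_k]:
            dataset.append((corpus[idx], label))
    return dataset
```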
2306.16564 | 38 | [9] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In International conference on machine learning, pages 1321–1330. PMLR, 2017.
[10] I. Hendrickx, S. N. Kim, Z. Kozareva, P. Nakov, D. Ó Séaghdha, S. Padó, M. Pennacchiotti, L. Romano, and S. Szpakowicz. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. arXiv preprint arXiv:1911.10422, 2019.
[11] R. Hoffmann, C. Zhang, X. Ling, L. Zettlemoyer, and D. S. Weld. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies, pages 541–550, 2011.
[12] C.-L. Hwang and A. S. M. Masud. Multiple objective decision making – methods and applications: a state-of-the-art survey, volume 164. Springer Science & Business Media, 2012.
2306.15895 | 39 | # 6.6 Extension to multi-label classification
In this section, we take a first attempt to extend the paradigm of using the LLM as a training data generator to the more challenging multi-label classification setting. In particular, we adopt the arXiv dataset [9], consisting of 98 fine-grained classes, on which we apply both SimPrompt and AttrPrompt. Following [16, 48], we consider different evaluation metrics, including Micro/Macro-F1, Precision@k, Normalized Discounted Cumulative Gain (nDCG@k), and Mean Reciprocal Rank (MRR). The experimental details are in Appendix A. We present the results in Table 10. Similar to our findings for single-label classification, AttrPrompt largely outperforms SimPrompt across all metrics, which not only strengthens the case for AttrPrompt but also opens the door to using LLMs as training data generators in future multi-label classification research.
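For reference, the ranking metrics used above (Precision@k, nDCG@k, MRR) can be computed as in the sketch below; `scores` are predicted class scores and `relevant` is the set of gold labels for one example. The definitions follow the usual formulations and may differ in detail from the paper's exact evaluation code.

```python
# Sketch of multi-label ranking metrics: Precision@k, nDCG@k, MRR.
import numpy as np

def precision_at_k(scores, relevant, k=5):
    topk = np.argsort(-scores)[:k]
    return sum(c in relevant for c in topk) / k

def ndcg_at_k(scores, relevant, k=5):
    topk = np.argsort(-scores)[:k]
    dcg = sum((c in relevant) / np.log2(r + 2) for r, c in enumerate(topk))
    ideal = sum(1 / np.log2(r + 2) for r in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

def mrr(scores, relevant):
    for rank, c in enumerate(np.argsort(-scores), start=1):
        if c in relevant:
            return 1.0 / rank
    return 0.0

# Example with 98 classes, as in the arXiv dataset.
scores = np.random.rand(98)
gold = {3, 17, 42}
print(precision_at_k(scores, gold), ndcg_at_k(scores, gold), mrr(scores, gold))
```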
2306.16564 | 39 | [13] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023.
[14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[15] M. Krallinger, O. Rabal, S. A. Akhondi, M. P. Pérez, J. Santamaría, G. P. Rodríguez, G. Tsatsaronis, A. Intxaurrondo, J. A. López, U. Nandal, et al. Overview of the biocreative vi chemical-protein interaction track. In Proceedings of the sixth BioCreative challenge evaluation workshop, volume 1, pages 141–146, 2017.
[16] H. Lang and H. Poon. Self-supervised self-supervision by combining deep learning and probabilistic logic. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 4978–4986, 2021.
2306.15895 | 40 | Table 10: Multi-label classification performance (in %) on the arXiv dataset.
| Method | Macro F1 | Micro F1 | Precision@1 | Precision@5 | NDCG@5 | MRR | Price/1k |
|---|---|---|---|---|---|---|---|
| Gold | 27.34 | 58.22 | 73.71 | 27.00 | 79.70 | 82.16 | – |
| SimPrompt | 21.03 | 26.75 | 37.00 | 15.08 | 42.49 | 49.60 | 1.41 |
| AttrPrompt | 27.10 | 37.88 | 49.27 | 18.79 | 54.74 | 61.23 | 1.53 |
# 7 Limitation, Social Impact, and Future Work
# 7.1 Limitation
One limitation of this research is that the investigation focuses primarily on text classification. While this provides valuable insights into the performance of attributed prompts compared to simple class-conditional prompts in these specific contexts, the generalizability of the findings to other data types and tasks remains to be explored. In addition, AttrPrompt assumes access to the label names of the target task, following prior works [30, 55], and requires some human effort in creating data attributes.
2306.16564 | 40 | [17] J. Li, Y. Sun, R. J. Johnson, D. Sciaky, C.-H. Wei, R. Leaman, A. P. Davis, C. J. Mattingly, T. C. Wiegers, and Z. Lu. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016, 2016.
[18] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[19] P. Manakul, A. Liusie, and M. J. Gales. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896, 2023.
[20] R. Müller, S. Kornblith, and G. E. Hinton. When does label smoothing help? Advances in neural information processing systems, 32, 2019.
[21] M. P. Naeini, G. Cooper, and M. Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the AAAI conference on artificial intelligence, volume 29, 2015.
[22] OpenAI. Gpt-4 technical report, 2023.
2306.15895 | 41 | In addition, one issue with LLM-based training data generation is the phenomenon of hallucination, wherein the model generates information that may sound plausible but is, in fact, not accurate or grounded in reality [2]. This can lead to the propagation of misinformation, particularly in sensitive areas such as news generation where accuracy is of utmost importance. To alleviate this issue, it is possible to leverage additional fact-checking mechanisms to cross-verify the generated text with a reliable knowledge base or dataset [39]. Furthermore, incorporating an additional layer of human review can also help in mitigating hallucinations and ensuring the faithfulness of generated outputs.
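One possible instantiation of such a fact-checking filter is sketched below: each generated text is checked against a retrieved reference passage with an off-the-shelf NLI model, and examples that are not entailed are dropped. The model choice, label order, threshold, and the `retrieve_passage` helper are illustrative assumptions, not a method proposed in the paper.

```python
# Hypothetical post-hoc filter: keep a generated text only if a trusted
# reference passage entails it (NLI-based verification).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NLI_MODEL = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(NLI_MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL).eval()

def is_supported(reference: str, generated: str, threshold: float = 0.5) -> bool:
    inputs = tok(reference, generated, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1)[0]
    # For this checkpoint the label order is (contradiction, neutral, entailment).
    return probs[2].item() >= threshold

# retrieve_passage() is a hypothetical knowledge-base lookup:
# kept = [x for x in generated_texts if is_supported(retrieve_passage(x), x)]
```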
# 7.2 Social impact
The findings of this research have significant implications for the development of trustworthy machine learning models. By addressing the limitations of simple class-conditional prompts and exploring the potential of attributed prompts, the research aims to mitigate biases and enhance the diversity of generated training data. This has a positive social impact, as it contributes to more robust and fair machine learning models, reducing the potential for biased decision-making systems and promoting inclusivity in various domains. This work does not involve human subjects research, and the data do not contain any personally identifiable information. Possible misuse may lead to negative outcomes, such as intentionally introducing biases into the generated dataset via attributes.
2306.16564 | 41 | [22] OpenAI. Gpt-4 technical report, 2023.
[23] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback, 2022.
[24] V. Pareto. Cours d'économie politique, volume 1. Librairie Droz, 1964.
[25] J. Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74, 1999.
[26] A. Ratner, S. H. Bach, H. Ehrenberg, J. Fries, S. Wu, and C. Ré. Snorkel: Rapid training data creation with weak supervision. In Proceedings of the VLDB Endowment. International Conference on Very Large Data Bases, volume 11, page 269. NIH Public Access, 2017.
2306.15895 | 42 | # 7.3 Future work
Broader Coverage of Tasks and Modalities. In future work, it would be valuable to extend the investigation to other types of classification tasks beyond text classification. Exploring the effectiveness of attributed prompts in image classification, audio classification, or other modalities could provide insights into the broader applicability of AttrPrompt.
Automatic Prompt Optimization. In this work, we aim to demonstrate the usefulness of attributes through the lens of diversity and bias, and we believe this opens the door for future work to explore methodology for automatic attribute discovery and selection, such as automated or semi-automated methods for identifying high-quality attribute dimensions and values. One promising direction is to incorporate a few labeled examples as demonstrations for generating better training data. Under this scenario, language models can aid in automatically detecting diverse attributes, modeling the high-order interactions among attributes, and estimating attribute value distributions for each class, which further reduces the human effort required to select key attributes.
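A small sketch of the attribute-discovery direction described above: prompting an LLM to propose attribute dimensions and candidate values for a given class, which a human can then curate before composing attributed prompts. The prompt wording, expected JSON output, and model name are assumptions for illustration only.

```python
# Hypothetical attribute-discovery step: ask an LLM for attribute dimensions
# and candidate values for a class, then have a human review the output.
import json
from openai import OpenAI

client = OpenAI()

def propose_attributes(task: str, class_name: str, n_values: int = 5) -> dict:
    prompt = (
        f"For the text classification task '{task}', list 3 attribute dimensions "
        f"(e.g., subtopic, style, length) that would diversify generated examples "
        f"of the class '{class_name}'. Return JSON mapping each dimension to "
        f"{n_values} candidate values."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    # Assumes the model returns valid JSON; a robust version would validate it.
    return json.loads(resp.choices[0].message.content)

# attributes = propose_attributes("NYT topic classification", "economy")
```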
2306.16564 | 42 | [27] A. Ratner, B. Hancock, J. Dunnmon, F. Sala, S. Pandey, and C. Ré. Training complex models with multi-task weak supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4763–4771, 2019.
[28] A. J. Ratner, C. M. De Sa, S. Wu, D. Selsam, and C. Ré. Data programming: Creating large training sets, quickly. Advances in neural information processing systems, 29, 2016.
[29] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of machine learning research, 11(4), 2010.
[30] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
2306.15895 | 43 | # 8 Conclusion
We delve into the realm of training data generation using complex, attributed prompts, which have the potential to produce diverse, attribute-rich generated data. Specifically, we focus on datasets characterized by diverse domains, high cardinality, and class imbalance, and our results demonstrate the superior performance of attributed prompts compared to simple class-conditional prompts. Furthermore, we present a comprehensive empirical study on training data generation that covers essential aspects such as bias, diversity, and efficiency.
# References
[1] Neel Alex, Eli Lifland, Lewis Tunstall, Abhishek Thakur, Pegah Maham, C. Jess Riedel, Emmie Hine, Carolyn Ashurst, Paul Sedille, Alexis Carlier, Michael Noetel, and Andreas Stuhlmüller. RAFT: A real-world few-shot text classification benchmark. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2021.
2306.16564 | 43 | [31] P. Varma, F. Sala, S. Sagawa, J. Fries, D. Fu, S. Khattar, A. Ramamoorthy, K. Xiao, K. Fatahalian, J. Priest, et al. Multi-resolution weak supervision for sequential data. Advances in Neural Information Processing Systems, 32, 2019.
[32] H. Wang and H. Poon. Deep probabilistic logic: A unifying framework for indirect supervision. arXiv preprint arXiv:1808.08485, 2018.
[33] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, and D. Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[34] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[35] R. Wu, S.-E. Chen, J. Zhang, and X. Chu. Learning hyper label model for programmatic weak supervision. In The Eleventh International Conference on Learning Representations.
2306.15895 | 44 | [2] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, et al. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023.
[3] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th annual meeting of the association of computational linguistics, 2007.
[4] Vadim Borisov, Kathrin Sessler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. Language models are realistic tabular data generators. In The Eleventh International Conference on Learning Representations, 2023.
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33, 2020.
2306.16564 | 44 | [36] Y. Xiao, Z. Zhang, Y. Mao, C. Yang, and J. Han. Sais: supervising and augmenting intermediate steps for document-level relation extraction. arXiv preprint arXiv:2109.12093, 2021.
[37] Y. Yu, S. Zuo, H. Jiang, W. Ren, T. Zhao, and C. Zhang. Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1063–1077, 2021.
[38] B. Zadrozny and C. Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 694–699, 2002.
[39] J. Zhang, C.-Y. Hsieh, Y. Yu, C. Zhang, and A. Ratner. A survey on programmatic weak supervision. arXiv preprint arXiv:2202.05433, 2022.
2306.15895 | 45 | [6] Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, and Daniel Hershcovich. Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), 2023.
[7] Derek Chen, Celine Lee, Yun-Yun Lu, Domenic Rosati, and Zhou Yu. Mixture of soft prompts for controllable data generation. ArXiv, abs/2303.01580, 2023.
[8] Yew Ken Chia, Lidong Bing, Soujanya Poria, and Luo Si. Relationprompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. In Findings of the Association for Computational Linguistics: ACL 2022, 2022.
[9] Colin B Clement, Matthew Bierbaum, Kevin P O'Keeffe, and Alexander A Alemi. On the use of arxiv as a dataset. arXiv preprint arXiv:1905.00075, 2019.
2306.16564 | 45 | [40] J. Zhang, Y. Yu, Y. Li, Y. Wang, Y. Yang, M. Yang, and A. Ratner. Wrench: A comprehensive benchmark for weak supervision. arXiv preprint arXiv:2109.11377, 2021.
[41] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[42] Z. Zhao, E. Wallace, S. Feng, D. Klein, and S. Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–12706. PMLR, 2021.
[43] W. Zhou, H. Lin, B. Y. Lin, Z. Wang, J. Du, L. Neves, and X. Ren. Nero: A neural rule grounding framework for label-efficient relation extraction. In Proceedings of The Web Conference 2020, pages 2166–2176, 2020.
# A Proof of Theorems | 2306.16564#45 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 46 | [10] Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsu- pervised cross-lingual representation learning at scale. In Annual Meeting of the Association for Computational Linguistics (ACL), 2019.
[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019. | 2306.15895#46 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 46 | 12
# A Proof of Theorems
Proposition A.1. Suppose the ground truth label y ∈ Y is unseen, and the imperfect supervision model Λ(x) has error rate α with misclassification evenly distributed:
P(Λ(x) = y) = 1 − α,   P(Λ(x) = y′) = α / (|Y| − 1),   y′ ≠ y.   (8)
Then fitting a model h(x) to Λ(x) is equivalent to training on ground truth labels with label smoothing as defined in Section 1.1 by [20].
Proof. Let's first vectorize y and Λ(x). Denote y_k = 1 if y = k, otherwise y_k = 0. Denote Λ_k = 1 if Λ(x) = k, otherwise Λ_k = 0. Let p_k be the probability P(h(x) = k). Fitting a model h(x) to Λ(x) is essentially minimizing
E[ Σ_{k=1}^{|Y|} −Λ_k log(p_k) ] = E[ Σ_{k=1}^{|Y|} −(1 − α) y_k log(p_k) − (α / (|Y| − 1)) (1 − y_k) log(p_k) ] = E[ Σ_{k=1}^{|Y|} −( y_k (1 − β) + β/|Y| ) log(p_k) ], | 2306.16564#46 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 47 | [12] Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. arXiv preprint arXiv:2304.14108, 2023. [13] Jiahui Gao, Renjie Pi, Lin Yong, Hang Xu, Jiacheng Ye, Zhiyong Wu, Weizhong Zhang, Xiaodan Liang, Zhenguo Li, and Lingpeng Kong. Self-guided noise-free data generation for efficient zero-shot learning. In International Conference on Learning Representations, 2023. [14] Tianyu Gao, Xingcheng Yao, and Danqi Chen. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021.
[15] Gregor Geigle, Nils Reimers, Andreas Rücklé, and Iryna Gurevych. Tweac: transformer with extendable qa agent classifiers. arXiv preprint arXiv:2104.07081, 2021. | 2306.15895#47 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
where β = α · |Y| / (|Y| − 1) is the label smoothing parameter.
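To make the correspondence concrete, here is a minimal sketch (illustrative code, not from the paper) that converts an assumed uniform error rate α of the supervision model into the implied label-smoothing parameter β and builds the smoothed target distribution; the function names are ours.

```python
import numpy as np

def smoothing_from_error_rate(alpha: float, num_classes: int) -> float:
    # beta = alpha * |Y| / (|Y| - 1), as derived in Proposition A.1.
    return alpha * num_classes / (num_classes - 1)

def smoothed_target(y: int, alpha: float, num_classes: int) -> np.ndarray:
    # Expected training target when fitting to the noisy supervision Lambda.
    beta = smoothing_from_error_rate(alpha, num_classes)
    target = np.full(num_classes, beta / num_classes)
    target[y] += 1.0 - beta
    return target

# Example: 4 classes and a 10% uniform error rate.
# The true class receives 1 - alpha = 0.9; every other class alpha / (|Y| - 1).
print(smoothed_target(y=2, alpha=0.1, num_classes=4))
```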
Proposition A.2. Consider target space Y = {−1, 1} for simplicity. Suppose the LLM is arbitrarily accurate with P(Λ(x) = y) = p < 1, and the weak independent signal ensemble is modeled by w(x) ∼ N(y · µ, σ²) with µ > 0. Then there always exists a function ψ(Λ, w) s.t.
P(ψ(Λ(x), w(x)) = y) > p.
Proof. Let's prove by constructing the function class

ψ(Λ, w) = sign(αΛ + w).
Suppose y = 1 WLOG. Then the accuracy is
P(αΛ + w > 0) = p · Φ((µ + α)/σ) + (1 − p) · Φ((µ − α)/σ),
where Φ(·) is the normal distribution CDF. As α → ∞, P(αΛ + w > 0) → p, as it is equivalent to the LLM itself. We just need to show that the derivative ∂P(αΛ + w > 0)/∂α is negative as α → ∞:
∂P(αΛ + w > 0)/∂α = (p/σ) · φ((µ + α)/σ) − ((1 − p)/σ) · φ((µ − α)/σ). | 2306.16564#47 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 48 | [16] Chuan Guo, Ali Mousavi, Xiang Wu, Daniel N Holtmann-Rice, Satyen Kale, Sashank Reddi, and Sanjiv Kumar. Breaking the glass ceiling for embedding-based classifiers for large output spaces. Advances in Neural Information Processing Systems, 32, 2019.
[17] Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
[18] Pengcheng He, Jianfeng Gao, and Weizhu Chen. DeBERTav3: Improving deBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. In The Eleventh International Conference on Learning Representations, 2023.
[19] Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. Tinybert: Distilling bert for natural language understanding. In Findings of EMNLP, 2020. | 2306.15895#48 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
∂P(αΛ + w > 0)/∂α = (p/σ) · φ((µ + α)/σ) − ((1 − p)/σ) · φ((µ − α)/σ).
As both terms are positive, the condition now is

p/(1 − p) · φ((µ + α)/σ) / φ((µ − α)/σ) < 1.
Writing down the normal PDF φ(·), we just need

(p/(1 − p)) · exp( ((µ − α)² − (µ + α)²) / (2σ²) ) = (p/(1 − p)) · exp( −2µα/σ² ) < 1,
which is easily achieved by the exponential function as α → ∞.
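The construction can also be checked numerically. Below is a hedged simulation sketch of Proposition A.2 (the parameter values and variable names are ours, not the paper's): it draws ground-truth labels, LLM predictions that are correct with probability p, and a Gaussian weak-signal ensemble, then reports the accuracy of sign(αΛ + w) for a few values of α.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, mu, sigma = 100_000, 0.7, 1.0, 1.0       # assumed simulation parameters

y = rng.choice([-1, 1], size=n)                 # ground-truth labels
llm = np.where(rng.random(n) < p, y, -y)        # LLM correct with probability p
w = y * mu + rng.normal(0.0, sigma, size=n)     # weak independent signal ensemble

for alpha in [0.25, 0.5, 1.0, 2.0, 4.0]:
    acc = (np.sign(alpha * llm + w) == y).mean()
    print(f"alpha={alpha:<5} accuracy of sign(alpha*Lambda + w) = {acc:.3f}  (LLM alone ~= {p})")
```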
Definition A.1 (Pareto optimal harmonizer). h* ∈ H is a Pareto optimal harmonizer to Λ and λ1, · · · , λm, if there does not exist any h ∈ H that Pareto dominates h* in Problem ??. Mathematically, if we denote λ0 := Λ, h* needs to satisfy the following:
∄ h ∈ H, s.t.  E[ℓ_j(h(x), λ_j(x))] ≤ E[ℓ_j(h*(x), λ_j(x))]  ∀j = 0, 1, · · · , m,  with strict inequality for at least one j.
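Read operationally, the definition says h* is Pareto optimal when no candidate is at least as good against every source and strictly better against one. A small illustrative helper (ours, not the paper's) for checking dominance between two candidates, given their per-source expected losses:

```python
from typing import Sequence

def pareto_dominates(losses_a: Sequence[float], losses_b: Sequence[float]) -> bool:
    """Return True if candidate A Pareto-dominates candidate B:
    no worse on every source and strictly better on at least one."""
    assert len(losses_a) == len(losses_b)
    no_worse = all(a <= b for a, b in zip(losses_a, losses_b))
    strictly_better = any(a < b for a, b in zip(losses_a, losses_b))
    return no_worse and strictly_better

# h* is Pareto optimal if no h in the hypothesis class Pareto-dominates it.
print(pareto_dominates([0.20, 0.30, 0.10], [0.20, 0.40, 0.10]))   # True
print(pareto_dominates([0.20, 0.50, 0.10], [0.20, 0.40, 0.10]))   # False
```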
For our specific problem, we propose to approximate the problem by minimizing the following Pareto loss function G : R_+^{m+1} → R_+: | 2306.16564#48 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 49 | [20] Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. In The Eleventh International Conference on Learning Representations, 2023.
[21] Hannah Rose Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, Frédéric A. Dreyer, Aleksandar Shtedritski, and Yuki M. Asano. Bias out-of-the-box: An empirical analysis of intersectional occupational biases in popular generative language models. In Neural Information Processing Systems, 2021.
[22] Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine- tuning can distort pretrained features and underperform out-of-distribution. In International Conference on Learning Representations, 2022.
[23] Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Anton Ragni, Shi Wang, and Jie Fu. HERB: In Findings of the Measuring hierarchical regional bias in pre-trained language models. Association for Computational Linguistics: AACL-IJCNLP 2022, 2022. | 2306.15895#49 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
For our specific problem, we propose to approximate the problem by minimizing the following Pareto loss function G : R_+^{m+1} → R_+:
min_{h∈H} E_{x∼X} [ G( ℓ_0(h(x), Λ(x)), ℓ_1(h(x), λ_1(x)), · · · , ℓ_m(h(x), λ_m(x)) ) ].   (9)
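As a concrete reading of Equation (9), the sketch below estimates the objective on a mini-batch. It assumes PyTorch-style tensors, an abstain value of −1 for weak sources, and a quadratic scalarizer (one of the choices discussed in Appendix B); none of this is the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def equation9_loss(logits: torch.Tensor, source_labels: torch.Tensor,
                   weights: torch.Tensor) -> torch.Tensor:
    """Mini-batch estimate of Equation (9) with a quadratic Pareto scalarizer.
    logits: (B, |Y|) harmonizer outputs; source_labels: (m+1, B) integer labels,
    row 0 = LLM, -1 marks an abstaining source; weights: (m+1,) source weights."""
    per_source = []
    for lam in source_labels:                               # loop over the m+1 sources
        loss_j = torch.zeros(lam.shape[0], device=logits.device)
        mask = lam >= 0
        if mask.any():
            loss_j[mask] = F.cross_entropy(logits[mask], lam[mask], reduction="none")
        per_source.append(loss_j)
    losses = torch.stack(per_source)                        # (m+1, B) per-example losses
    scalarized = (weights @ losses) ** 2                    # quadratic scalarizer per example
    return scalarized.mean()                                # empirical E_{x~X}[G(...)]
```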
We require G : R_+^{m+1} → R_+ to satisfy the following conditions.

Definition A.2 (Pareto scalarizer). G(ℓ_0, ℓ_1, · · · , ℓ_m) is a Pareto scalarizer, if it satisfies:
⢠G(â0, · · · , ââ² j, · · · , âm) < G(â0, · · · , âj, · · · , âm) if ââ² j < âj, for âj = 0, 1, · · · , m; | 2306.16564#49 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 50 | [24] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022.
[25] Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Tianjiao Zhao, et al. Beyond one-model-fits-all: A survey of domain specialization for large language models. arXiv preprint arXiv:2305.18703, 2023.
[26] Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. WANLI: Worker and AI collaboration for natural language inference dataset creation. In Findings of the Association for Computational Linguistics: EMNLP 2022, 2022.
[27] Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. Content preserving text generation with attribute controls. Advances in Neural Information Processing Systems, 31, 2018. | 2306.15895#50 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 50 | + â R+ is convex. Theorem A.1. Suppose G : Rm+1 + â R+ is a Pareto scalarizer as in Definition A.2. Solving the problem in Equation 9 approximate a Pareto optimal harmonizer by upperbounding an objective whose optimal solution is Pareto optimal as in Definition A.1.
G : Rm+1
Proof. For convenience, let's denote
u_j(h) := E[ℓ_j(h(x), λ_j(x))], j = 0, 1, · · · , m.
We first show that any h* minimizing G(u_0, u_1, · · · , u_m) is Pareto optimal. Proof by contradiction. Suppose h* is not Pareto optimal. Then there must exist some h′ ∈ H Pareto dominating h*. Without loss of generality, let's assume u_j(h′) < u_j(h*), and u_k(h′) ≤ u_k(h*), ∀k ≠ j. Then according to Definition A.2 of Pareto scalarizer, | 2306.16564#50 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 51 | [28] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
[29] Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, 2011.
[30] Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. Generating training data with language In Advances in Neural Information models: Towards zero-shot language understanding. Processing Systems, 2022.
[31] Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. Weakly-supervised hierarchical text classification. In Proceedings of the AAAI conference on artificial intelligence, 2019.
[32] Swaroop Mishra, Anjana Arunkumar, Bhavdeep Sachdeva, Chris Bryan, and Chitta Baral. Dqi: Measuring data quality in nlp. arXiv preprint arXiv:2005.00816, 2020. | 2306.15895#51 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
G(u_0(h′), · · · , u_j(h′), · · · , u_m(h′)) ≤ G(u_0(h*), · · · , u_j(h′), · · · , u_m(h*))
< G(u_0(h*), · · · , u_j(h*), · · · , u_m(h*)),   (10)–(12)
which contradicts the assumption that h* is the minimizer for
G(u0(h), · · · , uj(h), · · · , um(h)).
Therefore, the original statement is true, and minimizing the objective
min_{h∈H} G( E[ℓ_0(h(x), Λ(x))], E[ℓ_1(h(x), λ_1(x))], · · · , E[ℓ_m(h(x), λ_m(x))] )   (13)
gives a Pareto optimal harmonizer. | 2306.16564#51 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
[33] Swaroop Mishra, Anjana Arunkumar, Bhavdeep Sachdeva, Chris Bryan, and Chitta Baral. Reframing instructional prompts to GPTk's language. In Findings of the Association for Computational Linguistics: ACL 2022, 2022.
[34] Swaroop Mishra and Elnaz Nouri. HELP ME THINK: A simple prompting strategy for non-experts to create customized content with models. In Findings of the Association for Computational Linguistics: ACL 2023, 2023.
[35] Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? Advances in neural information processing systems, 32, 2019.
[36] OpenAI. GPT-4 technical report. arXiv, 2023.
[37] OpenAI. Introducing chatgpt, 2023.
[38] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 2022. | 2306.15895#52 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 52 | gives a Pareto optimal harmonizer.
Next, we use Jensen's inequality to upperbound this objective with the objective in problem 9. Using the fact that G is convex, we apply Jensen's inequality and get
G( E[ℓ_0(h(x), Λ(x))], E[ℓ_1(h(x), λ_1(x))], · · · , E[ℓ_m(h(x), λ_m(x))] ) ≤ E_{x∼X}[ G( ℓ_0(h(x), Λ(x)), ℓ_1(h(x), λ_1(x)), · · · , ℓ_m(h(x), λ_m(x)) ) ].
Therefore, solving the problem in Equation 9 approximates a Pareto optimal harmonizer by upper-bounding Equation 13.
# B Weights for Rebalancing the Sources
In our experiments, we explored four different types of scalarization functions, namely:
• Linear scalarizer: G(ℓ_0, ℓ_1, · · · , ℓ_m) := Σ_{j=0}^{m} w_j ℓ_j.
• Quadratic scalarizer: G(ℓ_0, ℓ_1, · · · , ℓ_m) := ( Σ_{j=0}^{m} w_j ℓ_j )². | 2306.16564#52 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 53 | [39] Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
[40] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. ArXiv, abs/2304.03277, 2023.
[41] Ethan Perez, Douwe Kiela, and Kyunghyun Cho. True few-shot learning with language models. In Advances in Neural Information Processing Systems, 2021.
[42] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert- In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- networks. guage Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019.
[43] Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 2021. | 2306.15895#53 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 53 | ⢠Quadratic scalarizer: G(â0, â1, · · · , âm) := j=0 wjâj .
2
• Euclidean norm scalarizer: G(ℓ_0, ℓ_1, · · · , ℓ_m) := ‖ (w_0 ℓ_0, w_1 ℓ_1, · · · , w_m ℓ_m) ‖.
• Chebyshev scalarizer: G(ℓ_0, ℓ_1, · · · , ℓ_m) := max_{j=0,···,m} w_j ℓ_j.
The weights w_j ∈ R are parameters of G. In the main text of the paper, we fixed them to equal weights, w = (1, · · · , 1). Here we introduce three approaches to determine the weighting if necessary.
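For concreteness, the four scalarizers can be written down directly. The sketch below is illustrative (tensor conventions and function names are ours, not the paper's):

```python
import torch

def linear_scalarizer(losses: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # G = sum_j w_j * l_j
    return (w * losses).sum()

def quadratic_scalarizer(losses: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # G = (sum_j w_j * l_j) ** 2
    return (w * losses).sum() ** 2

def euclidean_scalarizer(losses: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # G = || (w_0 l_0, ..., w_m l_m) ||
    return torch.linalg.norm(w * losses)

def chebyshev_scalarizer(losses: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # G = max_j w_j * l_j
    return (w * losses).max()

# losses[j] is the loss of the harmonizer against source j (j = 0 is the LLM);
# equal weights w_j = 1 / (m + 1) correspond to the default setting below.
m = 3
losses = torch.tensor([0.7, 0.4, 0.9, 0.5])
w = torch.full((m + 1,), 1.0 / (m + 1))
print(linear_scalarizer(losses, w), chebyshev_scalarizer(losses, w))
```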
# Equal Weight

The simplest weighting scheme of
w_0 = w_1 = · · · = w_m = 1 / (m + 1)
gives nice performance in practice, and is the method we used for the results in the main body of the paper. The nonlinear Pareto scalarizers have the ability to balance the sources even under equal weights. It is always recommended to start with equal weight. | 2306.16564#53 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 54 | [44] Giuseppe Russo, Nora Hollenstein, Claudiu Musat, and Ce Zhang. Control, generate, augment: A scalable framework for multi-attribute text generation. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020.
[45] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
[46] Timo Schick and Hinrich Schütze. Generating datasets with pretrained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021.
[47] Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. Synthetic prompting: Generating chain-of-thought demonstrations for large language models. arXiv preprint arXiv:2302.00618, 2023.
[48] Jiaming Shen, Wenda Qiu, Yu Meng, Jingbo Shang, Xiang Ren, and Jiawei Han. Taxoclass: Hierarchical multi-label text classification using only class names. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021. | 2306.15895#54 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 54 | In the case that the supervision sources are highly correlated, or when the quality of the sources varies a lot, we propose the following two approaches utilizing the correlation in prediction residual.
Maximal Eigenvalue of Residual Correlation

Suppose we have a pilot harmonizer h_0 ∈ H, which can usually be obtained from minimizing a Pareto scalarizer with equal weights. It gives a predicted distribution p(x) ∈ R^|Y| for any input x ∈ X, where
pc(x) := P(h0(x) = c).
For any source 0 ≤ j ≤ m, denote the one-hot vector λ_j(x) ∈ R^|Y| as:

λ_{j,c}(x) = 1 if λ_j(x) = c, and 0 otherwise.
The prediction residual is defined as

r_j(x) := λ_j(x) − p(x),

which accounts for the supervision function label for x that is unexplained by the harmonizer h_0(x).
In order to rebalance the sources, we consider the correlation matrix C between the prediction residuals r_j. Specifically, let the covariance be | 2306.16564#54 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 55 | [49] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, 2013.
[50] Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023.
[51] Guy Tevet and Jonathan Berant. Evaluating the evaluation of diversity in natural language generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021.
[52] Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019. | 2306.15895#55 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 55 | In order to rebalance the sources, we consider the correlation matrix C between the prediction residuals r̂_j. Specifically, let the covariance be
Σ_{ij} := E_{x∼X}[ r̂_i(x) · r̂_j(x) ] − E_{x∼X}[ r̂_i(x) ] · E_{x∼X}[ r̂_j(x) ].
The correlation matrix is denoted as
C_{ij} = Σ_{ij} / √(Σ_{ii} Σ_{jj}).
We rebalance the sources according to the eigenvector v̂_max ∈ R^{m+1} corresponding to the largest eigenvalue of C. In order to get reasonable weights, we first normalize v̂_max such that the sum of its entries equals one. Then we project v̂_max onto the weight simplex with minimal values ϵ/(m+1):
w_{excess,j} = ( v̂_{max,j} − ϵ/(m+1) )_+ ,   ŵ_max = ϵ/(m+1) + (1 − ϵ) · w_excess / ‖w_excess‖_1.
This projection ensures that the weight on each source is at least an ϵ portion of the value from equal weights, with the minimal ratio threshold ϵ ∈ [0, 1]. | 2306.16564#55 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 56 | [53] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560, 2022.
[54] Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, and Yejin Choi. Reframing human-AI collaboration for generating free-text explanations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022.
[55] Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. ZeroGen: Efficient zero-shot learning via dataset generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022.
[56] Jiacheng Ye, Jiahui Gao, Zhiyong Wu, Jiangtao Feng, Tao Yu, and Lingpeng Kong. ProGen: Progressive zero-shot dataset generation via in-context feedback. In Findings of the Association for Computational Linguistics: EMNLP 2022, 2022. | 2306.15895#56 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 56 | This projection ensures that the weight on each source is at least an ϵ portion of the value from equal weights, with the minimal ratio threshold ϵ ∈ [0, 1].
The maximal eigenvalue method is recommended when the sources are relatively independent and when the quality of the sources differs a lot. Intuitively, if two sources tend to agree with each other on examples that are not fitted well by the harmonizer, and there is no intrinsic dependency between the sources, it is likely that the true label is the one given by the sources. Therefore, the maximal eigenvalue rebalancing scheme puts higher weights on such sources to encourage the harmonizer to fit the unexplained examples.
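A minimal sketch of this rebalancing step is given below, assuming the one-hot source votes and the pilot harmonizer's predicted distributions are already available as NumPy arrays; the function and variable names are illustrative and not from the paper's released code.

```python
import numpy as np

def max_eigenvalue_weights(lambda_onehot, p_hat, eps=0.1):
    """Rebalance the m+1 sources by the top eigenvector of the residual correlation matrix.

    lambda_onehot: array of shape (m+1, n, |Y|), one-hot votes of each source on n examples.
    p_hat:         array of shape (n, |Y|), predicted distribution of the pilot harmonizer.
    """
    residual = lambda_onehot - p_hat[None, :, :]                  # r_j(x) = lambda_j(x) - p(x)
    flat = residual.reshape(residual.shape[0], -1)                # one row of residual features per source
    cov = np.cov(flat)                                            # Sigma_{ij}
    corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))    # C_{ij}
    _, eigvecs = np.linalg.eigh(corr)                             # eigenvalues in ascending order
    v_max = eigvecs[:, -1]
    v_max = v_max / v_max.sum()                                   # normalize entries to sum to one
    m1 = len(v_max)
    w_excess = np.clip(v_max - eps / m1, 0.0, None)               # positive part
    return eps / m1 + (1.0 - eps) * w_excess / (w_excess.sum() + 1e-12)
```

The returned weights keep at least an ϵ/(m+1) share per source and sum to one, matching the projection described above.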
Minimal Variance of Residual Correlation As in the maximal eigenvalue method, we consider the correlation matrix C between the prediction residuals r̂_j. Instead of finding the maximal eigenvalue of C, we solve the following minimal variance problem:
min_v  v^T C v,   s.t.  1^T v = 1.
This problem admits the closed-form solution
v̂_min = C^{−1} 1 / ( 1^T C^{−1} 1 ).
Again, we project v̂_min onto the weight simplex with minimal values ϵ/(m+1):
w_{excess,j} = ( v̂_{min,j} − ϵ/(m+1) )_+ ,   ŵ_min = ϵ/(m+1) + (1 − ϵ) · w_excess / ‖w_excess‖_1, | 2306.16564#56 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 57 | [57] Dian Yu, Zhou Yu, and Kenji Sagae. Attribute alignment: Controlling text generation from pre-trained language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, 2021.
[58] Yue Yu, Chenyan Xiong, Si Sun, Chao Zhang, and Arnold Overwijk. COCO-DR: Combating distribution shifts in zero-shot dense retrieval with contrastive and distributionally robust learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022.
[59] Yue Yu, Yuchen Zhuang, Rongzhi Zhang, Yu Meng, Jiaming Shen, and Chao Zhang. Regen: Zero-shot text classification via training data generation with progressive dense retrieval. In Findings of the Association for Computational Linguistics: ACL 2023, 2023.
[60] Jieyu Zhang, Bohan Wang, Zhengyu Hu, Pang Wei Koh, and Alexander Ratner. On the trade-off of intra-/inter-class diversity for supervised pre-training. arXiv preprint arXiv:2305.12224, 2023. | 2306.15895#57 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 57 | ϵ/(m+1):
w_{excess,j} = ( v̂_{min,j} − ϵ/(m+1) )_+ ,   ŵ_min = ϵ/(m+1) + (1 − ϵ) · w_excess / ‖w_excess‖_1,
which ensures that the weight on each source is at least an ϵ portion of the value from equal weights, with the minimal ratio threshold ϵ ∈ [0, 1].
The minimal variance method is a classical portfolio rebalancing strategy in financial mathematics. The intuition behind the algorithm is to minimize risk through diversification. This rebalancing scheme is useful when there are intrinsic dependencies between the sources. If two sources are duplicates and always tend to give the same label, their residuals will also be highly correlated. Minimal variance optimization automatically avoids putting too much weight on the duplicated sources.
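Under the same assumptions as the sketch above (illustrative names, reusing the residual correlation matrix C), the closed-form minimal variance weights can be computed as follows.

```python
import numpy as np

def min_variance_weights(corr, eps=0.1):
    """Minimal variance rebalancing: v_min = C^{-1} 1 / (1^T C^{-1} 1), then project onto the weight simplex."""
    m1 = corr.shape[0]
    ones = np.ones(m1)
    c_inv_1 = np.linalg.solve(corr, ones)            # C^{-1} 1, without forming the explicit inverse
    v_min = c_inv_1 / (ones @ c_inv_1)               # entries sum to one by construction
    w_excess = np.clip(v_min - eps / m1, 0.0, None)  # positive part
    return eps / m1 + (1.0 - eps) * w_excess / (w_excess.sum() + 1e-12)
```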
While the equal weight method typically delivers good results in the simplest way, the other two rebalancing schemes are designed to address specific concerns such as source dependency and quality. It is always recommended to check against the labels on a validation set if one is available.
# C Training details
We explored different configurations of Pareto optimal learning below:
⢠Harmonizer model: we experiment 1. BERT [6] (PubMedBERT [8] for biomedical datasets CDR and ChemProt), 2. multi-layer perceptron (MLP), 3. Logistic regression (LR). The last two are built on top of the last layer embedding of the corresponding BERT model. | 2306.16564#57 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 58 | [61] Rongzhi Zhang, Yue Yu, Pranav Shetty, Le Song, and Chao Zhang. Prboost: Prompt-based rule discovery and boosting for interactive weakly-supervised learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022.
[62] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
[63] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28, 2015.
[64] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. In The Eleventh International Conference on Learning Representations, 2023. | 2306.15895#58 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 58 | • Pareto loss scalarizer: we experiment with all four loss scalarization functions defined in Section 3.3, namely linear, quadratic, Euclidean norm, and Chebyshev scalarization (see the sketch after this list).
• Optimizer: We use the AdamW [18] optimizer with learning rate in [10^-4, 10^-5, 10^-6], weight decay in [10^-4, 10^-5], and batch size 16. All hyperparameters are optimized on a held-out dev set.
• Computation: We trained on Azure Standard NC12s v3 with 1 Nvidia V100 GPU.
# D LLM Prompting Details
In this section we will describe the details of the prompts used to query the LLMs.
# D.1 Out-of-the-box prompt
⢠Setting: describe the role of the LLM in the task, and the overall problem setting.
⢠Background: necessary background knowledge for domain specific tasks, including infor- mation from annotation guidelines for human annotators.
Data structure: for relation extraction tasks, explain the definition of each entity.
⢠Desired output: describe the list of the desired output. For each of the categories, provide explanation and optionally some examples.
⢠Chain of thought (CoT): instruction to encourage the LLM to think step-by-step, articulate point-by-point, and give the response in the desired structure. | 2306.16564#58 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 59 | [65] Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. Exploring AI ethics of ChatGPT: A diagnostic analysis. ArXiv, abs/2301.12867, 2023.
# A Dataset Preprocessing
# A.1 Multi-class Classification
For Reddit and StackOverflow, we select the classes with more than 65 examples from the original corpus as the target set of topics. For each dataset, we use 50 examples per class for the test set and no more than 10 examples for the validation set (10 for NYT/Amazon and 5 for Reddit/StackOverflow). The remaining data is used to compose the gold training set. It is worth noting that some of the class names on Reddit may contain toxic information. To eliminate their effects, we filter label names with Detoxify9, a tool that leverages the multilingual XLM-RoBERTa [10] for toxic comment identification. We follow [12] and use a threshold of 0.1 to filter out potentially toxic topic classes.
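A sketch of this filtering step with the Detoxify package is shown below; the 0.1 threshold follows the text, while the function and variable names are illustrative.

```python
from detoxify import Detoxify

def filter_toxic_labels(label_names, threshold=0.1):
    """Keep only class names whose predicted toxicity stays below the threshold."""
    scores = Detoxify("multilingual").predict(label_names)   # dict of per-example score lists
    return [name for name, tox in zip(label_names, scores["toxicity"]) if tox < threshold]

# usage (illustrative): clean_labels = filter_toxic_labels(reddit_label_names)
```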
# A.2 Multi-label Classification | 2306.15895#59 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 59 | • Chain of thought (CoT): instruction to encourage the LLM to think step-by-step, articulate point-by-point, and give the response in the desired structure.
• Confidence: ask the model to state "unsure" if it is not confident about the answer.
• Example: state the example and ask the model to perform the task.
Each prompt for out-of-the-box (zero-shot) prediction contains:
• A problem setting part that depends on the specific dataset.
• A response regularization part that encourages chain-of-thought (CoT) reasoning and a confidence check, and specifies the proper response format.
• A task instance part that contains the input instance and restates the task to perform.
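Putting the three parts together, a zero-shot query can be assembled roughly as in the sketch below; the wording of the fixed parts is paraphrased rather than the exact prompt text used in the experiments, and the function name is illustrative.

```python
def build_prompt(problem_setting, text, entity1=None, entity2=None):
    """Assemble problem setting + response regularization + task instance into one zero-shot prompt."""
    response_regularization = (
        "Please think step-by-step and articulate your reasoning point-by-point. "
        "If you are not confident about your answer, state 'unsure'. "
        "End your response with the final category on its own line."
    )
    task_instance = f"Text: {text}"
    if entity1 is not None and entity2 is not None:
        task_instance += f"\nEntity 1: {entity1}\nEntity 2: {entity2}"
    task_instance += "\nNow perform the task described above for this instance."
    return "\n\n".join([problem_setting, response_regularization, task_instance])
```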
# Problem setting prompt
⢠CDR: âYou are an intelligent assistant to extract chemical-disease relations from academic literature. Your job is to determine if in the given piece of text, the drug (entity 1) induces the disease (entity 2) or not. Negative means the drug does NOT induce the disease. Positive means the drug induces the disease. Please use your judgement to the best of your knowledge. Your answer should be classified into the following categories: [Negative, Positive]. â | 2306.16564#59 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 60 | For the arXiv multi-label classification, we use the arXiv labels from the website10 as the label names. We select topics under the category of Physics, Mathematics, Computer Science, Quantitative Biology, and Quantitative Finance. Besides, we remove the topics of General Physics, General Mathematics, General Literature, Other Computer Science, and Other Quantitative Biology, as they only contain generic topics without concrete semantics. In total, there are 98 classes.
For the arXiv multi-label classification, we use the arXiv labels from the website10 as the label names. We select topics under the category of Physics, Mathematics, Computer Science, Quantitative Biology, and Quantitative Finance. Besides, we remove the topic of General Physics, General Mathematics, General Literature, Other Computer Science, Other Quantitative Biology as they only contain generic topics without concrete semantics. In total, there are 98 classes in general.
To generate examples using the SimPrompt approach, we follow a specific procedure. For each document, we begin by sampling from the distribution of label numbers, utilizing prior knowledge obtained from the original training set. This enables us to determine the precise number of labels, denoted as n, that should be associated with the document. Once we have determined n, we proceed to randomly select n classes from the original set of 98 labels. These chosen classes are then utilized as the input for prompting the LLM, forming a foundation for generating the examples. | 2306.15895#60 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 60 | ⢠ChemProt: âYou are an intelligent assistant to extract chemical-protein interaction from academic literature. Your task is to identify the chemical-protein interactions (CHEMPROT) between entity 2: Chemical Entities Mentions (CEMs) and entity 1: Gene and Protein Related Objects (named as GPRO in the instruction below) in the given piece of text. In brief, the chemical-protein interactions include direct interactions (when a physical contact exits between a CEM and a GPRO, in most cases this GPRO being a protein or protein
family and alters its function/activity) as well as indirect regulatory interactions between CEMs and GPROs (including genes, gene products (proteins, RNA), DNA/protein sequence elements and protein families, domains and complexes) that alter either the function or the quantity of the GPRO. The guidelines below provide curation rules to evaluate if the given sentence contains a description of a chemical-protein interaction; in particular, if sufficient detail/evidence is provided for comentioned CEMs and GPROs. Additionally, it describes curation rules and definitions to assign each identified chemical-protein interaction to any of the 10 classes, with detailed description listed below: | 2306.16564#60 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 61 | To generate examples using the AttrPrompt approach, we employ a distinct methodology. For each document, we consider a set of attributes, namely subtopics, technique, writing style, and length. It is important to note that, unlike multi-class classification, where an additional step is typically taken to filter out irrelevant subtopics, our multi-label classification process follows a different approach. In our method, we introduce an additional step called merging to handle the different subtopics. To determine the merging of subtopics, we utilize the fuzzy matching score11, which calculates the similarity between the subtopics. Setting the threshold at γ = 90, we proceed to merge subtopics whenever their similarity score exceeds this threshold. Consequently, specific subtopics may be associated with multiple classes, providing a more comprehensive representation of the document content. When generating text utilizing these subtopics, the corresponding labels are determined by the classes associated with the specific subtopics. This ensures that the generated examples accurately reflect the relevant classes based on the subtopic associations.
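A sketch of the subtopic merging step with the thefuzz package is shown below; the γ = 90 threshold is from the text, while the data structures and names are illustrative.

```python
from thefuzz import fuzz

def merge_subtopics(subtopics_per_class, gamma=90):
    """Map each (merged) subtopic to the set of classes whose generated subtopics fuzzy-match it."""
    merged = {}  # subtopic -> set of class labels
    for label, subtopics in subtopics_per_class.items():
        for topic in subtopics:
            match = next((t for t in merged if fuzz.ratio(t, topic) >= gamma), None)
            key = match if match is not None else topic
            merged.setdefault(key, set()).add(label)
    return merged
```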
# B Implementation Details
# B.1 Hardware information
All experiments are conducted on an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz and NVIDIA GeForce RTX A5000 GPUs, using Python 3.8, Hugging Face Transformers 4.6.0, and PyTorch 1.10.
# B.2 Parameter configuration | 2306.15895#61 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 61 | 0. Part of: CEM that are structurally related to a GPRO: e.g. specific amino acid residues of a protein. 1. Regulator: CEM that clearly regulates a GPRO, but for which there is no further information on whether the regulation is direct or indirect. 2. Upregulator: CEM that increments a GPRO signal, without any insight on the mechanism. 3. Downregulator: CEM that decreases a GPRO signal, without any insight on the mechanism. 4. Agonist: CEM that binds to a receptor and alters the receptor state resulting in a biological response. 5. Antagonist: CEM that reduces the action of another CEM, generally an agonist. Many antagonists act at the same receptor macromolecule as the agonist. 6. Modulator: CEM that acts as allosteric modulator, compound that increases or decreases the action of an (primary or orthosteric) agonist or antagonist by combining with a distinct (allosteric or allotropic) site on the receptor macromolecule. 7. Cofactor: CEM that is required for a proteinâs biological activity to happen. 8. Substrate/Product: CEM that is both, substrate and product of | 2306.16564#61 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 62 | # B.2 Parameter configuration
We follow [40] to keep the parameter top_p = 1.0 and temperature t = 1.0 when calling the ChatGPT APIs [37] for training data generation. For fine-tuning the classifier, we optimize the model using AdamW [28] with a linear warmup over the first 5% of steps and linear learning rate decay. The maximum number of tokens per sequence is 128. Table 11 lists the hyperparameters used for AttrPrompt and SimPrompt. For the generated synthetic dataset, we stick to the strict zero-shot learning setting [30], train all the models for 6 epochs, and use the model from the last epoch without using the validation set for evaluation. For the original clean dataset, we train all models for 6 epochs and use the model with the best performance on the validation set for evaluation.
9https://github.com/unitaryai/detoxify 10https://arxiv.org/ 11https://github.com/seatgeek/thefuzz | 2306.15895#62 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 64 | # Table 11: Hyperparameters for fine-tuning classifiers on different tasks.
Model: Link
BERT-base-uncased [11]: https://huggingface.co/bert-base-uncased
TinyBERT [19]: https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D
DistilBERT-base-uncased [45]: https://huggingface.co/distilbert-base-uncased
DeBERTa-v3-base [18]: https://huggingface.co/microsoft/deberta-v3-base
DeBERTa-v3-large [18]: https://huggingface.co/microsoft/deberta-v3-large
SentenceBERT [42]: https://huggingface.co/sentence-transformers/all-mpnet-base-v2
SimCSE [14]: https://huggingface.co/princeton-nlp/sup-simcse-bert-base-uncased
COCO-DR [58]: https://huggingface.co/OpenMatch/cocodr-base-msmarco
Table 12: The reference of model checkpoints used in this study.
# B.3 Model checkpoint | 2306.15895#64 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 64 | SemEval: âYou are an intelligent assistant to help recognize semantic relations between pairs of nomimals. For example, tea and ginseng are in an ENTITY-ORIGIN relation in "The cup contained tea from dried ginseng.". You will be given a piece of text, and Entity 1 and Entity 2 in the text for you to classify their semantic relation. The semantic relations are in the format of "entity1-entity2". The complete semantic relation inventory is given below: 0. Cause-Effect: An event or object (entity 1) leads to an effect (entity 2). Example: those cancers (entity 2) were caused by radiation exposures (entity 1) 1. Component-Whole: An object (entity 1) is a component of a larger whole (entity 2). Example: my apartment (entity 2) has a large kitchen (entity 1) 2. Content-Container: An object (entity 1) is physically stored in a delineated area of space (entity 2). Example: a bottle (entity 2) full of honey (entity 1) was weighed 3. Entity-Destination: An entity (entity 1) is moving towards a destination (entity 2). Example: the boy (entity 1) went to bed (entity 2) 4. Entity-Origin: An | 2306.16564#64 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 65 | Table 12: The reference of model checkpoints used in this study.
# B.3 Model checkpoint
All the checkpoints used in the fine-tuning and linear probing experiments are listed in Table 12.
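For reference, any of the checkpoints in Table 12 can be loaded for fine-tuning through the Hugging Face transformers API; the snippet below is a minimal sketch, and the number of labels must be set to the class count of the target dataset.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-uncased"   # or any other entry from Table 12
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=10,                 # illustrative; set to the number of classes in the target dataset
)
```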
# C Additional Experimental Results
# C.1 Linear probing with generated datasets
In this section, we evaluate the generated datasets under the linear probing setting. We employ four text embedding methods and present the results in Table 13. We can see that AttrPrompt consistently outperforms SimPrompt and surpasses the Gold data on the NYT dataset. This further indicates that the datasets generated with AttrPrompt have higher quality than those generated with SimPrompt, and that this superiority is robust to the choice of model.
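A minimal sketch of this protocol (frozen sentence embeddings plus a linear classifier trained on top) is shown below; train_texts, train_labels, test_texts, and test_labels are illustrative names for the generated training split and the test split, and the encoder is one of the checkpoints from Table 12.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # frozen text encoder
X_train = encoder.encode(train_texts)
X_test = encoder.encode(test_texts)

probe = LogisticRegression(max_iter=1000)   # only this linear layer is trained
probe.fit(X_train, train_labels)
accuracy = probe.score(X_test, test_labels)
```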
# C.2 Different temperature parameters for ChatGPT
Temperature (t) is one crucial hyperparameter of LLMs that controls the diversity of the generated text [17], while the studied attributed prompts also aim to diversify the generated data. We are therefore curious about the effectiveness of the temperature and how it compares to AttrPrompt. We study different values of the temperature on the NYT dataset and present the results in Table 14. From the results, we can see that AttrPrompt brings a more significant performance gain than temperature tuning, demonstrating its superiority.
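The sweep itself can be issued against the ChatGPT API as sketched below (using the legacy openai.ChatCompletion interface); the prompt string and the temperature grid are illustrative, not the exact values used in the study.

```python
import openai  # assumes openai.api_key has already been set

def generate(prompt, temperature):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        top_p=1.0,
    )
    return response["choices"][0]["message"]["content"]

prompt = "Suppose you are a news writer. Please write a news article about politics."  # illustrative
samples = {t: [generate(prompt, t) for _ in range(10)] for t in (0.5, 1.0, 1.5, 2.0)}
```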
# C.3 The performance with respect to long-tail classes | 2306.15895#65 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 65 | An entity (entity 1) is moving towards a destination (entity 2). Example: the boy (entity 1) went to bed (entity 2) 4. Entity-Origin: An entity (entity 1) is coming or is derived from an origin (entity 2) (e.g., position or material). Example: letters (entity 1) from foreign countries (entity 2) 5. Instrument-Agency: An agent (entity 2) uses an instrument (entity 1). Example: phone (entity 1) operator (entity 2) 6. Member-Collection: A member (entity 1) forms a nonfunctional part of a collection (entity 2). Example: there are many trees (entity 1) in the forest (entity 2) 7. Message-Topic: A message (entity 1), written or spoken, is about a topic (entity 2). Example: the lecture (entity 1) was about semantics (entity 2) 8. Product-Producer: A producer (entity 2) causes a product (entity 1) to exist. Example: a factory (entity 2) manufactures suits (entity 1) | 2306.16564#65 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 66 | # C.3 The performance with respect to long-tail classes
As we have seen in Table 2, the original training sets of the involved datasets suffer from a severe long-tail class issue since their imbalance ratio is high, whereas the generated datasets are class-balanced. We are therefore curious how the class balance in the generated dataset benefits the model performance on long-tail classes. We take the NYT dataset as an example and plot the per-class F1 score of Gold, SimPrompt, and AttrPrompt in Figure 10, where the x-axis lists the classes sorted by their number of examples in the Gold dataset in descending order. From the figure, we can see that out of 26 classes, AttrPrompt attains the best per-class F1 score on 10 classes, compared with 13 for Gold and 3 for SimPrompt. Notably, for classes with few examples in the Gold set (the rightmost 4 classes in the figure), AttrPrompt is better than both Gold and SimPrompt, especially for the class "abortion", which has the fewest examples. This suggests a data-centric way to handle the long-tail class issue in topic classification: one may use LLMs to
Table 13: Performance of linear probing with different pre-trained encoders. | 2306.15895#66 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 66 |
Please determine the semantic relation between entity 1 and entity 2 in the given text to the best of your knowledge. Your answer should be classified into the following categories: [Cause-Effect, Component-Whole, Content-Container, Entity-Destination, Entity-Origin, Instrument-Agency, Member-Collection, Message-Topic, Product-Producer]. "
• SMS: "You are an intelligent assistant to determine if a text message is spam or not spam (ham). Your answer should be classified into the following categories: [ham, spam]. "
Response regularization prompt "You may think step by step, articulate point by point, or make conclusion from multiple evidences, but please always state the most likely label as your answer at the very begining of your response. You are encouraged to reflect on your response, but please keep in mind that a clear answer is always desired. Try to give a clear answer at your best guess even when you are not very sure, in which case any of your conserns or explanations should go after the most likely answer to the best of your knowledge. If you are very unsure about the answer and are not willing to explicitly state any label, please say 'unsure' at the very begining of your response. "
# Task instance prompt
• Classification (for SMS):
âPlease classify the following example into the most likely category: [TEXT] â | 2306.16564#66 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 67 | Sequence Encoder Method Acc NYT F1 Amazon F1 Acc Reddit Acc F1 StackExchange Acc F1 BERT-CLS [11] Gold SimPrompt MetaPrompt AttrPrompt 65.39 60.69 58.43 62.38 70.26 62.26 57.05 64.17 66.73 37.95 27.52 48.18 68.76 38.07 24.43 49.46 55.73 27.41 23.93 45.03 58.00 27.60 23.99 45.95 17.17 12.27 11.40 17.33 17.68 13.71 12.60 17.64 SentenceBERT [42] Gold SimPrompt MetaPrompt AttrPrompt 81.16 84.32 87.65 88.77 84.43 85.91 87.11 88.36 80.95 67.55 67.16 68.10 83.30 68.29 65.78 69.11 80.76 69.94 79.26 71.62 81.37 70.62 69.15 71.73 55.13 45.37 45.71 46.80 54.25 47.79 47.54 48.35 SimCSE [14] Gold SimPrompt MetaPrompt AttrPrompt 78.02 78.93 78.17 81.41 80.17 78.86 78.27 81.04 73.20 | 2306.15895#67 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 67 | # Task instance prompt
• Classification (for SMS):
"Please classify the following example into the most likely category: [TEXT] "
• Relation extraction (for CDR, ChemProt, SemEval):
"Please classify the following example into the most likely category: [TEXT] Entity 1: [ENTITY 1] Entity 2: [ENTITY 2] "
The complete prompt for querying the LLM is
# Problem setting prompt + Response regularization prompt + Task instance prompt
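A sketch of this concatenation, where the three *_prompt strings are placeholders for the texts quoted in this appendix rather than the verbatim prompts:

```python
# Assemble the complete LLM query from the three prompt parts described above.
problem_setting_prompt = "You are an intelligent assistant to ... "        # task definition (placeholder)
response_regularization_prompt = "You may think step by step, ... "        # answer-format constraint (placeholder)
task_instance_template = "Please classify the following example into the most likely category: {text}"

def build_full_prompt(text):
    return (problem_setting_prompt
            + response_regularization_prompt
            + task_instance_template.format(text=text))
```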
# D.2 Dynamic prompting
In dynamic prompting, we query the LLM with another follow-up prompt after it gives the initial out-of-the-box response. As this is an extension to our main experiments, we only implemented it for the CDR relation extraction task. The follow-up prompts for the two dynamic prompting strategies are:
Dynamic self-examination "Are you sure about your previous answer? If not, please give a new answer. Otherwise, please restate your previous answer. "
Dynamic self-supervision "It is possible that the answer could be something else. Here are some evidences to help you figure out the right answer.
# EvidencesFromSupervisionFunctions(x, λ⃗(x))
Are you sure about your previous answer? If not, please give a new answer. Otherwise, please restate your previous answer. â | 2306.16564#67 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 68 | Gold SimPrompt MetaPrompt AttrPrompt 78.02 78.93 78.17 81.41 80.17 78.86 78.27 81.04 73.20 52.40 53.53 60.66 75.13 52.83 52.55 61.32 73.75 54.78 51.30 59.22 74.26 53.86 48.93 59.15 45.26 30.98 30.54 30.31 41.86 32.92 31.50 31.43 COCO-DR [58] Gold SimPrompt MetaPrompt AttrPrompt 80.66 84.37 84.51 86.74 82.60 84.17 84.26 86.43 76.13 55.66 57.96 60.47 78.34 55.28 56.58 62.03 79.63 63.66 65.36 66.73 80.04 64.71 65.91 66.71 61.02 43.22 44.30 45.44 63.77 41.50 42.97 43.76 Average Gold SimPrompt MetaPrompt AttrPrompt 76.31 77.08 77.19 79.83 79.37 77.80 76.67 80.00 74.25 53.39 51.54 59.35 76.38 53.62 49.84 60.48 72.47 53.95 54.96 60.65 73.42 | 2306.15895#68 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 68 | Are you sure about your previous answer? If not, please give a new answer. Otherwise, please restate your previous answer. â
EvidencesFromSupervisionFunctions(x, λ⃗(x)) contains evidences from all the supervision functions λj(x) ≠ 0 that are triggered by the input instance x. Examples of evidence from the supervision functions are shown below. Note that each evidence will be provided only when the corresponding supervision function is triggered.
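As a hypothetical sketch (the template strings and field names below are simplified stand-ins for the evidence sentences listed next), the dynamic self-supervision follow-up could be assembled as:

```python
# Turn the triggered supervision functions into evidence sentences, then re-ask the LLM.
def evidences_from_supervision_functions(x, lf_votes, templates):
    # lf_votes[j] != 0 means supervision function j is triggered on instance x;
    # each template uses fields of x such as {entity1} and {entity2}.
    return " ".join(templates[j].format(**x)
                    for j, vote in enumerate(lf_votes) if vote != 0)

def dynamic_self_supervision_prompt(x, lf_votes, templates):
    return ("It is possible that the answer could be something else. "
            "Here are some evidences to help you figure out the right answer. "
            + evidences_from_supervision_functions(x, lf_votes, templates)
            + " Are you sure about your previous answer? If not, please give a new answer. "
              "Otherwise, please restate your previous answer.")
```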
• "According to the Comparative Toxicogenomics Database, the relation between the given chemical-condition pair is listed, confirming the answer. "
• "According to the Comparative Toxicogenomics Database, the given chemical-condition pair "[ENTITY 1]-[ENTITY 2]" is listed that the chemical actually treats the condition, so the answer that [ENTITY 1] does not induce [ENTITY 2] is confirmed. "
• "According to the Comparative Toxicogenomics Database, the given chemical-condition pair "[ENTITY 1]-[ENTITY 2]" is listed that the chemical is typically present with the condition, which may confirm the answer if [ENTITY 1] induces [ENTITY 2]. "
⢠âBased on the expression [INDUCE PATTERN], it is likely that [ENTITY 1] induces [ENTITY 2]. â | 2306.16564#68 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.16564 | 69 | ⢠âBased on the expression [INDUCE PATTERN], it is likely that [ENTITY 1] induces [ENTITY 2]. â
• "Based on the expression [NOT INDUCE PATTERN], it is not likely that [ENTITY 1] induces [ENTITY 2]. "
• "Based on the expression [C TREATS D PATTERN], [ENTITY 1] actually treats [ENTITY 2], so it is not likely that [ENTITY 1] induces [ENTITY 2]. "
• "Based on the expression [CLOSE MENTION PATTERN], [ENTITY 1] is closely mentioned with [ENTITY 2], so they should be closely related. "
• "Based on the expression [DISEASE IMPROVE PATTERN], the disease [ENTITY 2] is actually improved, so it is not likely that [ENTITY 1] induces [ENTITY 2]. "
• "Based on the expression [INITIAL CONDITION PATTERN], [ENTITY 2] is the initial condition of the patient(s), so it is not likely that [ENTITY 1] induces [ENTITY 2]. "
⢠âBased on the expression [UNCERTAIN PATTERN], it is uncertain that [ENTITY 1] induces [ENTITY 2]. â | 2306.16564#69 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 70 | Table 14: Study of the temperature.
Method      t = 1.0 (Acc / F1)   t = 1.5 (Acc / F1)   t = 2.0 (Acc / F1)
SimPrompt   76.00 / 76.34        76.78 / 77.31        76.55 / 77.42
AttrPrompt  81.30 / 82.26        81.47 / 82.23        79.47 / 79.86
generate a class-balanced training set, or augment the existing training set with LLM-generated data such that the augmented dataset is class-balanced; an in-depth study of this is left as future work.
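A sketch, under assumptions, of this augmentation idea: compute how many examples each class lacks relative to the largest class and request that many LLM-generated samples for it (generate_samples is a hypothetical LLM-backed helper, not part of the released code):

```python
# Balance a long-tail training set by topping up minority classes with generated data.
from collections import Counter

def balance_with_llm(texts, labels, generate_samples):
    counts = Counter(labels)
    target = max(counts.values())          # size of the largest class
    for label, n in counts.items():
        needed = target - n
        if needed > 0:
            new_texts = generate_samples(label, needed)  # LLM generates `needed` examples
            texts.extend(new_texts)
            labels.extend([label] * needed)
    return texts, labels
```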
Figure 10: Per-class F1-score of the NYT dataset.
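Per-class F1 scores of the kind plotted in Figure 10 can be obtained, for instance, with scikit-learn (a minimal sketch; the class ids are illustrative):

```python
# Compute one F1 score per class from gold and predicted labels.
from sklearn.metrics import f1_score

def per_class_f1(y_true, y_pred, class_ids):
    scores = f1_score(y_true, y_pred, labels=class_ids, average=None)
    return dict(zip(class_ids, scores))
```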
# C.4 Experiment results on other datasets
To provide a comprehensive evaluation of AttrPrompt, we conducted additional empirical studies using datasets commonly employed in prior works, which typically exhibit low cardinality. Consistent with [56, 55, 59], we focused on four specific tasks: SST-2 [49], IMDB [29], Yelp [63], and AG News [63]. The IMDB, SST-2, and Yelp datasets involve binary sentiment classification, while the AG News dataset entails news classification with only four classes. This selection allows for diverse evaluation across different classification scenarios. | 2306.15895#70 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.16564 | 70 | ⢠âBased on the expression [UNCERTAIN PATTERN], it is uncertain that [ENTITY 1] induces [ENTITY 2]. â
• "Based on the expression [INDUCED BY OTHER PATTERN], [ENTITY 2] is induced by other factors, so it is not likely that [ENTITY 1] induces [ENTITY 2]. "
• "[ENTITY 1] and [ENTITY 2] are not closely mentioned in the text, so it is not likely that [ENTITY 1] induces [ENTITY 2]. "
• "According to phrases like [WEAK EXPRESSION], there is no strong signal that [ENTITY 1] induces [ENTITY 2]. "
• "According to the text, another chemical is mentioned closer to [ENTITY 2] than [ENTITY 1], so it is not likely that [ENTITY 1] induces [ENTITY 2]. "
• "According to the text, another disease is mentioned closer to [ENTITY 1] than [ENTITY 2], so it is not likely that [ENTITY 1] induces [ENTITY 2]. "
20 | 2306.16564#70 | Automatic Calibration and Error Correction for Generative Large Language Models via Pareto Optimal Self-Supervision | Generative Large language models (LLMs) have demonstrated remarkable
capabilities for a wide range of applications, but reducing ungrounded or
erroneous responses remains a major growth area. Unlike task-specific models,
there lack an effective method to calibrate the confidence level of LLM
responses to indicate potential errors and facilitate human-in-the-loop
verification. An important source of calibration stems from expert-stipulated
programmatic supervision, which is often available at low cost but has its own
limitations such as noise and coverage. In this paper, we introduce a Pareto
optimal self-supervision framework that can leverage available programmatic
supervision to systematically calibrate LLM responses by producing a risk score
for every LLM response, without any additional manual efforts. This is
accomplished by learning a harmonizer model to align with LLM output as well as
other weak supervision sources. The model assigns higher risk scores to more
uncertain LLM responses and facilitate error correction. Experiments on
standard relation extraction and classification tasks in biomedical and general
domains demonstrate that the proposed risk score is highly correlated with the
actual LLM error rate. By using a dynamic prompting strategy based on the risk
score, we observed significant accuracy improvement for off-the-shelf LLMs,
boosting GPT-3.5 results past state-of-the-art (SOTA) weak supervision model
and GPT-4 results past SOTA supervised results on challenging evaluation
datasets. | http://arxiv.org/pdf/2306.16564 | Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon | cs.CL, stat.ML | null | null | cs.CL | 20230628 | 20231026 | [
{
"id": "1808.08485"
},
{
"id": "2302.13971"
},
{
"id": "2004.06025"
},
{
"id": "2202.05433"
},
{
"id": "1711.05101"
},
{
"id": "2203.11171"
},
{
"id": "2303.18223"
},
{
"id": "2303.08896"
},
{
"id": "2109.11377"
},
{
"id": "2201.11903"
},
{
"id": "2109.12093"
},
{
"id": "1911.10422"
}
] |
2306.15895 | 71 | To ensure a fair comparison, we utilized DistilBERT [45] as the classifier, in line with [56, 55, 13]. For the purpose of fine-tuning the classifier, we generated a total of 6000 examples, evenly distributed among the classes. For fine-tuning, we use a learning rate of 2 × 10−5 with 5 epochs and use the model after the final epoch for evaluation.
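A sketch of this fine-tuning setup with Hugging Face Transformers (tokenization and dataset construction are omitted, and the batch size is an assumption, not stated in the text):

```python
# Fine-tune distilbert-base-uncased with lr 2e-5 for 5 epochs on the generated data.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def finetune_distilbert(train_dataset, num_classes):
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=num_classes)
    args = TrainingArguments(
        output_dir="out",
        learning_rate=2e-5,              # 2 x 10^-5 as stated above
        num_train_epochs=5,              # keep the model after the final epoch
        per_device_train_batch_size=32,  # assumption, not from the paper text
    )
    Trainer(model=model, args=args, train_dataset=train_dataset,
            tokenizer=tokenizer).train()
    return model
```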
Table 15: Results with recent baselines using DistilBERT [45]. *: Results are copied from the previous papers [56, 55, 13, 59]. | 2306.15895#71 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 72 | Table 15: Results with recent baselines using DistilBERT [45]. *: Results are copied from the previous papers [56, 55, 13, 59].
Method/Dataset         IMDB   SST-2  Yelp   AG News  Avg.
Simple fine-tuning with standard cross-entropy loss:
SimPrompt              85.64  79.38  88.39  74.30    81.93
AttrPrompt             85.92  83.64  90.60  82.74    85.73
Other results with dedicated training techniques (for reference only):
Zero-shot Prompting*   77.31  82.63  78.10  73.20    77.81
ZeroGen*               82.77  80.41  87.84  76.48    81.88
SuperGen*              85.05  84.28  89.88  77.40    84.15
ProGen*                87.20  84.12  89.39  80.81    85.38
SunGen*                89.45  89.38  89.19  80.49    87.12
ReGen*                 85.32  87.84  89.00  80.18    85.58 | 2306.15895#72 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 73 | The results of our evaluation are presented in Table 15. Overall, we observe that AttrPrompt demonstrates competitive performance across all four tasks. It outperforms our direct baseline (SimPrompt) by an impressive margin of 3.8%. Moreover, when compared to other baselines incorporating dedicated noise-aware learning techniques [52, 35], AttrPrompt still achieves promising performance, despite using the standard cross-entropy loss. This suggests that, in addition to leveraging noise-robust learning objectives, enhancing the diversity of input prompts serves as an effective alternative for improving downstream performance. It would be interesting and important to combine these different learning paradigms to further boost the performance.
# D Additional Bias Analysis
# D.1 Regional Bias with Manual Annotations | 2306.15895#73 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 74 | # D Additional Bias Analysis
# D.1 Regional Bias with Manual Annotations
In the main body of the paper, we manually annotate the location for 100 samples from each of the Gold, SimPrompt, and AttrPrompt datasets. Here, we compare the manual annotations with the predictions of the attribute classifier in Table 16 (the first three rows). Note that we include "unknown" as an option in the manual annotation to absorb text without a clear location. As per the results, there is significant alignment between the manual annotations and the attribute classifier predictions: the accuracy is greater than 0.95 for the Gold and AttrPrompt datasets, while it is 0.82 for SimPrompt. The lower accuracy on SimPrompt is attributed to its tendency to generate more data with unknown locations.
Furthermore, we use the manually annotated data to evaluate ChatGPT's adherence to the location attribute stipulated in the prompts. By comparing the location attribute in the prompt associated with each AttrPrompt sample against the human annotations, we obtain an accuracy of 0.96 (the last row of Table 16). This high accuracy indicates that ChatGPT can effectively generate news adhering to the location attribute specified in the prompts.
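Both rows of Table 16 reduce to a simple agreement rate between two annotation sources; a minimal sketch:

```python
# Fraction of samples on which two annotation sources agree (e.g., prompt attribute vs. human label).
def agreement_accuracy(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
```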
Table 16: Accuracies based on manual annotation of the location attribute of the NYT dataset. | 2306.15895#74 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 75 | Table 16: Accuracies based on manual annotation of the location attribute of the NYT dataset.
Comparison                        Data Source   Accuracy
Attribute Classifier vs. Human    Gold          0.96
Attribute Classifier vs. Human    SimPrompt     0.82
Attribute Classifier vs. Human    AttrPrompt    0.95
Attribute in Prompt vs. Human     AttrPrompt    0.96
# D.2 Biases other than regional bias
In addition to the regional bias studied in the main body of the paper, here we investigate bias in the other datasets (Figure 11). From the results, we can see that the Gold and SimPrompt datasets exhibit different levels of bias, which shows that, in general, biases exist in both the real and the generated datasets, while the dataset generated with AttrPrompt has a relatively balanced attribute distribution.
(a) Amazon-Experience: distributions of Gold, SimPrompt, and AttrPrompt over the categories worst, bad, average, good, and excellent.
(b) Reddit-Style: distributions of Gold, SimPrompt, and AttrPrompt over the categories informative/educational, entertaining/funny, thought-provoking/discussion, personal/storytelling, and help/advice. | 2306.15895#75 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 76 | (b) Reddit-Style.
(c) StackExchange-Style: distributions of Gold, SimPrompt, and AttrPrompt over the categories specific, comparative, problem-solution, troubleshooting, and tutorial-based.
Figure 11: Pie charts of predicted attribute distributions on different datasets.
# E The Prompt Format for LLM-Generated Attributes
E.1 Prompts for NYT Dataset
E.1.1 Key Attributes
Prompts to ChatGPT. What do you think are important attributes to generate some diverse news from NYT under a specific topic, such as Football, Federal Budget. Examples: subtopics, writing style...
Response from ChatGPT. Important attributes of news from the New York Times under a specific topic could include:
Subtopics: exploring different angles and subtopics within the main topic can lead to a wider range of news stories.
Writing style: varying the writing style can help to make the news stories more interesting and engaging for readers. This can include using different tones, perspectives, and sentence structures.
Sources: using a variety of sources, such as experts, eyewitnesses, and affected individuals, can provide different perspectives and add diversity to the news stories.
Geographical locations: covering the topic from different geographical locations can help to show how it affects different communities and provide a global perspective. | 2306.15895#76 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 77 | Geographical locations: covering the topic from different geographical locations can help to show how it affects different communities and provide a global perspective.
Timeliness: covering current events and breaking news related to the topic can add diversity and relevance to the news stories.
Multimedia: using a range of multimedia elements, such as images, videos, and infographics, can help to add variety and appeal to the news stories.
Length: varying the length of the news stories, from short and concise to longer and more in-depth, can help to provide a wider range of information and cater to different readers' preferences.
# Selected Attributes
1. Subtopics
2. Writing Styles
3. Geographical locations (determined by six continents by default)
4. Length (determined by {30-80} and {100-150} by default)
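For illustration, an attributed generation prompt could be composed from these four dimensions roughly as follows (the attribute values and template wording are assumptions for the sketch, not the paper's exact prompt):

```python
# Compose an attributed prompt by sampling one value per attribute dimension.
import random

subtopics = {"federal budget": ["budget negotiations", "defense spending", "tax policy"]}  # toy values
styles = ["investigative reporting", "opinion/editorial", "human interest/profiles",
          "long-form narrative", "breaking news"]
locations = ["Asia", "Africa", "Europe", "North America", "South America", "Oceania"]
lengths = ["30-80 words", "100-150 words"]

def attributed_prompt(class_name):
    subtopic = random.choice(subtopics.get(class_name, ["general coverage"]))
    return (f"Suppose you are a news writer for NYT. Please write a {random.choice(lengths)} "
            f"news article about {class_name}, focusing on the subtopic '{subtopic}', "
            f"located in {random.choice(locations)}, in the style of {random.choice(styles)}.")
```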
# E.1.2 Attributes Details
Prompts to ChatGPT for Similar Classes. List 5 similar classes for {class-name} news on NYT. The set of classes is listed as: {[a list of class-names]}.
Prompts to ChatGPT for Subtopics. List 10 diverse subtopics for {class_name} news on NYT.
Response from ChatGPT. See section G for details.
E.1.3 Writing Styles
Prompts to ChatGPT. List 5 different writing styles of NYT news.
# Response from ChatGPT. | 2306.15895#77 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 78 | E.1.3 Writing Styles
Prompts to ChatGPT. List 5 different writing styles of NYT news.
# Response from ChatGPT.
1. Investigative reporting
2. Opinion/editorial
3. Human interest/profiles
4. Long-form narrative
5. Breaking news
E.2 Prompts for Amazon Dataset
E.2.1 Key Attributes
Prompts to ChatGPT. What do you think are important attributes to generate some diverse product reviews. Examples: writing style...
Response from ChatGPT. When writing a review for a product, here are five important attributes to consider:
Product Type: Clearly mention the type of product you are reviewing, such as a smartphone, laptop, or fitness tracker. This helps readers understand the category and purpose of the product.
Brand: Specify the brand of the product as it often influences quality, reputation, and customer support. Discuss the brandâs overall credibility and whether it aligns with your expectations.
User Experience: Evaluate the overall user experience of the product. Discuss its speed, accuracy, reliability, and efficiency in performing its intended tasks. Highlight any exceptional or lacking performance aspects.
Quality and Durability: Assess the quality of the product, including the materials used, construction, and overall durability. Discuss whether it feels well-made, solid, and likely to withstand regular use over time.
Features and Functionality: Describe the specific features and functions of the product. Highlight any unique or standout features that enhance its usability or set it apart from similar products in the market.
# E.2.2 Selected Attributes
1. Product Name 2. Brands 3. User Experience 4. Writing Styles 5. Length
# E.2.3 Attributes Details
Prompts to ChatGPT for Similar Classes. List 5 similar classes for {product-class} products on Amazon. The set of classes is listed as: {[a list of class-names]}.
# Prompts to ChatGPT for Product Name/Brands. List 20 diverse product names/brands for {product-class} on Amazon.
Response from ChatGPT. See section G.2 for details.
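The class-dependent attribute values referenced above (e.g., the 20 product names or brands per class) can be collected by sending these listing prompts to ChatGPT and parsing the numbered list in the reply. The snippet below is a minimal sketch of that step, assuming the pre-1.0 openai Python SDK; the model name, temperature, and the parse_numbered_list helper are illustrative choices rather than the exact settings of the released code.

import re
import openai  # assumes the pre-1.0 SDK interface (openai.ChatCompletion)

def parse_numbered_list(text):
    # Split a "1. ... 2. ..." style reply into individual attribute values.
    items = re.split(r"\s*\d+\.\s+", text)
    return [item.strip() for item in items if item.strip()]

def list_attribute_values(product_class, attribute="product names", n=20, model="gpt-3.5-turbo"):
    # Ask ChatGPT for n diverse values of one class-dependent attribute.
    prompt = f"List {n} diverse {attribute} for {product_class} on Amazon."
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return parse_numbered_list(response["choices"][0]["message"]["content"])

# Example (hypothetical): brands = list_attribute_values("camera_photo", attribute="brands")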
E.2.4 Usage Experience
Prompts to ChatGPT. List 5 different usage experiences of Amazon reviews.
# Response from ChatGPT.
1. Worst: A highly unsatisfactory or problematic experience. Users may encounter multiple issues, such as usability problems, errors, and frustrations.
2. Bad: A below-average experience. Users may encounter some difficulties or frustrations while interacting with the product or service, but the overall experience falls short of meeting their expectations.
3. Average: A neutral or middling experience. Users may not encounter major issues, but the experience lacks any exceptional aspects or standout features.
4. Good: A positive and satisfactory experience. Users have a smooth and enjoyable experience with minimal issues or obstacles.
5. Excellent: The highest level of satisfaction. Users have a seamless, delightful, and memorable experience.
# E.2.5 Writing Styles
Prompts to ChatGPT. List 5 different writing styles of Amazon reviews.
# Response from ChatGPT.
1. Detailed Review: A detailed review provides a comprehensive analysis of the product, covering various aspects such as features, performance, durability, and user experience.
2. Comparative Review: A comparative review compares the product with similar options available on Amazon.
3. Pros and Cons Review: A pros and cons review presents a balanced assessment of the product by listing its positive and negative aspects.
4. Personal Experience Review: A personal experience review shares the reviewer's firsthand experience with the product.
5. Recommendation Review: A recommendation review offers a clear recommendation or endorsement of the product.
# E.2.6 Prompt Template
See table 17 for details.
Table 17: Prompt template for the Amazon Product Review dataset.
SimPrompt: Suppose you are a review writer. Please write a review for {product-class} in Amazon.
AttrPrompt: Suppose you are a review writer. Please write a review for {product-class} product in Amazon following the requirements below: 1. The review should be about the product of {subtopic}; 2. The brand for the product should be {brand}; 3. Should be in length between {length:min-words} and {length:max-words} words; 4. Should describe the usage experience {usage-experience}; 5. The writing style of the review should be {style}; 6. the review must be relevant to {product-class} and irrelevant to: {similar-class}.
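To turn Table 17 into concrete queries, one value is drawn for each attribute and substituted into the AttrPrompt template, so that every generated review is conditioned on a different attribute combination. The following is a minimal sketch of that sampling-and-filling step; the attribute pools shown are small illustrative excerpts and the helper names are hypothetical, not taken from the released code.

import random

# Illustrative excerpts of attribute pools; the full, class-dependent pools come from
# the attribute-generation step above and are listed in Appendix G.
ATTRIBUTES = {
    "subtopic": ["digital camera", "camera tripod"],
    "brand": ["Canon", "Nikon", "Fujifilm"],
    "usage-experience": ["worst", "average", "excellent"],
    "style": ["detailed review", "pros and cons review"],
    "length": [(30, 80), (80, 150)],
}

ATTR_PROMPT = (
    "Suppose you are a review writer. Please write a review for {product_class} "
    "product in Amazon following the requirements below: "
    "1. The review should be about the product of {subtopic}; "
    "2. The brand for the product should be {brand}; "
    "3. Should be in length between {min_words} and {max_words} words; "
    "4. Should describe the usage experience {usage_experience}; "
    "5. The writing style of the review should be {style}; "
    "6. the review must be relevant to {product_class} and irrelevant to: {similar_class}."
)

def sample_attr_prompt(product_class, similar_classes):
    # Draw one value per attribute and fill the AttrPrompt template of Table 17.
    min_words, max_words = random.choice(ATTRIBUTES["length"])
    return ATTR_PROMPT.format(
        product_class=product_class,
        subtopic=random.choice(ATTRIBUTES["subtopic"]),
        brand=random.choice(ATTRIBUTES["brand"]),
        min_words=min_words,
        max_words=max_words,
        usage_experience=random.choice(ATTRIBUTES["usage-experience"]),
        style=random.choice(ATTRIBUTES["style"]),
        similar_class=", ".join(similar_classes),
    )

# Each sampled prompt is then sent to the chat-completion API (as in the earlier sketch)
# and the reply is stored as one synthetic training example labeled with product_class.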
# E.3 Prompts for Reddit Dataset
# E.3.1 Key Attributes
Prompts to ChatGPT. List 5 important attributes to generate some diverse reddit posts under a specific topic, like build_a_pc, premed, etc. Examples: writing style...
Response from ChatGPT. Informative Guides and Tutorials: Create posts that provide step-by-step guides, tutorials, or informative content. These can include detailed instructions, tips, or recommendations to help readers in their journey. For example, in "build_a_pc," you could create a guide on assembling a gaming PC or optimizing system performance.
Response from ChatGPT. Informative Guides and Tutorials: Create posts that provide step- by-step guides, tutorials, or informative content. These can include detailed instructions, tips, or recommendations to help readers in their journey. For example, in "build_a_pc," you could create a guide on assembling a gaming PC or optimizing system performance.
Writing Style: Utilize different writing styles to create diverse posts. For instance, you can write informative guides or tutorials, opinion pieces, comparison reviews, troubleshooting tips, or personal stories related to the topic.
Discussion Starters: Pose thought-provoking questions or statements to initiate discussions and engage the Reddit community. These can be open-ended queries, asking for opinions or experiences, or presenting a controversial viewpoint to encourage debate and dialogue.
Personal Experiences: Share personal stories or experiences related to the topic. This allows you to connect with the community on a more personal level and provide insights based on your firsthand experiences. It can include successes, challenges, lessons learned, or interesting anecdotes. | 2306.15895#82 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
Resources and Recommendations: Provide valuable resources such as articles, websites, tools, or recommended products/services related to the topic. These resources can help the community access additional information, useful tools, or trusted recommendations, enhancing their understanding and engagement with the subject.
# E.3.2 Selected Attributes
1. Personal Experiences
2. Resources
3. Writing Styles
4. Length
# E.3.3 Attributes Details
Prompts to ChatGPT for Similar Classes. List 5 similar classes for {post-class} products on Reddit. The set of classes is listed as: {[a list of class-names]}.
Prompts to ChatGPT for Personal Experiences/Resources. List 20 diverse Personal Experiences/Resources for posts about {post-class} in Reddit.
Response from ChatGPT. See section G.3 for details.
# E.3.4 Writing Styles
Prompts to ChatGPT. List 5 different writing styles of Reddit Posts.
Table 18: Prompt template for the Reddit dataset.
SimPrompt: Suppose you are a writer for posts in Reddit. Please give a post for {post-class} in Reddit.
AttrPrompt: Suppose you are a review writer. Please give a post in Reddit on {post-class} community following the requirements below: 1. The post should focus on {experience}; 2. Should be in length between {length:min-words} and {length:max-words} words; 3. The writing style of the post should be {style}; 4. Should mention the resource of {resource}; 5. the post must be relevant to {post-class} and irrelevant to: {similar-class}.
# Response from ChatGPT.
1. Informative/Educational: Create posts that aim to provide valuable information or educate the community on a particular topic. Share informative articles, research findings, guides, tutorials, or explain complex concepts in a simplified manner.
2. Entertaining/Funny: Craft posts that are meant to entertain or bring humor to the community. Share funny stories, jokes, memes, amusing videos, or any form of content that can elicit laughter and enjoyment.
3. Thought-Provoking/Discussion: Initiate posts that spark thoughtful discussions and invite users to share their opinions or experiences. Pose open-ended questions, present philosophical dilemmas, discuss current events, or explore societal issues to foster engaging conversations.
4. Personal/Storytelling: Share personal experiences, anecdotes, or narratives that others can relate to. Tell compelling stories, share achievements, challenges, or life lessons that can inspire or resonate with the community.
5. Help/Advice: Create posts where you seek help or offer advice to the community. Ask for guidance on a specific problem, seek recommendations, provide support, or share expertise in a particular field.
# E.3.5 Prompt Template
See table 18 for details.
# F Prompts for StackExchange Dataset
F.1 Selected Attributes
1. Depth of Knowledge
2. Writing Styles
3. Problem Scenarios
4. Length
# F.2 Key Attributes
Prompts to ChatGPT. What do you think are important attributes to generate diverse posts under a specific topic on the stackexchange forum, like 3D meshes, server, etc. For example, one important attribute could be writing style...
Response from ChatGPT. When generating diverse posts under a specific topic on the Stack Exchange forum, several important attributes can contribute to the variety and richness of the discussions:
Depth of Knowledge: Participants with different levels of expertise can offer insights and perspectives from beginner to advanced, allowing for a broader range of questions and answers.
Diverse Perspectives: Encouraging participants from various backgrounds, industries, and experiences can bring fresh viewpoints and approaches to the topic.
Objectives: Encouraging participants to state their objectives or goals in their posts can help generate diverse content. Some posts may aim to seek advice, share experiences, provide step-by-step guides, analyze specific problems, or discuss broader concepts.
Problem Scenarios: Encourage users to discuss real-world problem scenarios related to the topic. This helps generate diverse posts by exploring different use cases, challenges, and approaches to problem-solving.
Problem Solving: Promoting problem-solving skills and encouraging users to share their challenges, roadblocks, and solutions can foster a collaborative and dynamic environment.
Clear Communication: Emphasizing clear and concise communication helps to ensure that questions are well-articulated and that answers are easily understood by a wide range of users.
# F.3 Prompts for arXiv Dataset
# F.3.1 Key Attributes
Prompts to ChatGPT. What do you think are important attributes to generate diverse papers under a specific topic on arXiv. For example, one important attribute could be writing style...
Response from ChatGPT. Generating diverse papers under a specific topic on platforms like arXiv involves considering various important attributes. Here are a few attributes that can contribute to generating diverse papers:
Writing Style: Encouraging papers with different writing styles, such as technical, expository, theoretical, or empirical, can bring diversity to the presentation and appeal to a wider range of readers.
Subtopics: Promoting papers that explore different subtopics within the broader topic can provide comprehensive coverage and delve into specific areas of interest.
Techniques: Encouraging papers that employ different research methodologies, such as experimental, computational, or analytical, can bring diverse approaches to studying the topic.
Data Sources: Promoting papers that utilize diverse data sources, such as surveys, simulations, real-world datasets, or case studies, can offer different perspectives and insights into the topic.
Interdisciplinary Perspectives: Encouraging papers that incorporate interdisciplinary perspectives, drawing insights from multiple fields or combining methodologies from different disciplines, can contribute to a richer understanding of the topic.
# F.3.2 Selected Attributes
1. Subtopics 2. Techniques 3. Writing Styles 4. Length
Prompts to ChatGPT for Subtopics/Techniques. List 20 diverse subtopics/techniques for paper about {paper-class} in arXiv.
Response from ChatGPT. See section G.5 for details.
F.3.3 Writing Styles
Prompts to ChatGPT. List 5 different writing styles of arXiv papers.
# Response from ChatGPT.
1. Experimental: This style focuses on presenting empirical research with a strong emphasis on experimental design, data collection, and statistical analysis.
2. Theoretical: This style emphasizes the development of new theoretical frameworks or conceptual models.
Table 19: Prompt template for the arXiv multi-label dataset.
SimPrompt: Please give a synthetic paper abstract about {paper-classes}.
AttrPrompt: Please write an abstract of a {paper-classes} paper in arXiv following the requirements below: 1. Should focus on the {subtopic}; 2. Should be in length between {length:min-words} and {length:max-words} words; 3. The paper should use the technique relevant to {technique}; 4. The style of the paper should be in {style};
3. Review: This style involves synthesizing and analyzing existing literature on a specific topic.
4. Case Study: This style presents an in-depth examination of a specific case or situation, aiming to provide insights into complex phenomena or phenomena that are context-specific.
# F.3.4 Prompt Template
See table 19 for details.
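Since the arXiv task is multi-label, the {paper-classes} placeholder in Table 19 can hold several category names at once. The sketch below illustrates one way the sampled label set could be rendered into the prompt and encoded as a multi-hot training target; the class names and the "and"-joining convention are illustrative assumptions rather than the exact formatting of the released code.

import random

def sample_label_set(all_classes, max_labels=2):
    # Draw a (possibly multi-label) set of arXiv categories for one synthetic example.
    k = random.randint(1, max_labels)
    return random.sample(all_classes, k)

def fill_paper_classes(labels):
    # Render the {paper-classes} placeholder, e.g. "machine learning and databases".
    return " and ".join(labels)

def to_multi_hot(labels, all_classes):
    # Encode the sampled label set as the multi-hot target stored with the abstract.
    return [1 if c in labels else 0 for c in all_classes]

# Hypothetical usage:
# classes = ["machine learning", "databases", "networking"]
# labels = sample_label_set(classes)
# prompt = f"Please give a synthetic paper abstract about {fill_paper_classes(labels)}."
# target = to_multi_hot(labels, classes)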
# F.4 Prompts for AG News Dataset
The prompts are the same as those for the NYT dataset; the only difference is the subtopics used for each label. The subtopic information is listed in G.6 for details.
# F.5 Prompts for Yelp/SST-2/IMDB Dataset
# F.5.1 Key Attributes
Prompts to ChatGPT. What do you think are important attributes to generate some diverse reviews with sentiments? E.g. writing styles.
Response from ChatGPT. Writing Styles: Utilizing different writing styles such as descriptive, narrative, persuasive, or humorous can create a diverse range of tones and emotions in the reviews.
Language and Tone: Varying the language and tone used in the reviews can evoke different sentiments. This can include formal or casual language, positive or negative tone, or even incorporating elements of storytelling.
Personal Experience: Sharing personal experiences and anecdotes in the reviews can add authenticity and emotional depth, allowing readers to connect with the sentiments expressed.
Descriptive Details: Including specific and vivid descriptions of the ambiance, decor, flavors, smells, or textures of the food can create a sensory experience that evokes various sentiments.
# F.5.2 Selected Attributes
1. Personal Experience 2. Descriptive Details 3. Writing Styles 4. Length
# F.5.3 Personal Experiences/Descriptive Details
Prompts to ChatGPT. Details are listed in Appendix G.6 and G.8.
F.5.4 Writing Styles
Prompts to ChatGPT. List 5 different writing styles of Restaurant Reviews (for Yelp).
# Response from ChatGPT.
1. Descriptive Style: Focuses on providing detailed descriptions of the restaurant's ambiance, decor, and overall atmosphere. It aims to paint a vivid picture of the dining experience.
Table 20: Prompt template for the Yelp dataset.
SimPrompt: Suppose you are a review writer. Please give an example of a {sentiment} review in Yelp.
AttrPrompt: Please write a {sentiment} review for a restaurant, following the requirements below: 1. The overall review should be {sentiment}; 2. It should be a {descriptive-details} restaurant; 3. Should focus on {personal-experience}; 4. Should be in length between {length:min-words} and {length:max-words} words; 5. The writing style of the review should be {style};
Table 21: Prompt template for the SST-2/IMDB dataset.
SimPrompt: Suppose you are a review writer. Please give an example of a {sentiment} review for a movie.
AttrPrompt: Please write a {sentiment} review for a movie, following the requirements below: 1. The overall review should be {sentiment}; 2. It should be a {descriptive-details} movie; 3. Should focus on {personal-experience}; 4. Should be in length between {length:min-words} and {length:max-words} words; 5. The writing style of the review should be {style};
2. Personal Narrative Style: Incorporates personal anecdotes and experiences from the visit to the restaurant. It adds a personal touch to the review and allows the reader to connect with the reviewer's perspective.
3. Analysis Style: Takes a more analytical approach by evaluating various aspects of the restaurant, such as food quality, service, cleanliness, and value for money.
4. Comparative Style: Compares the restaurant to similar establishments in terms of cuisine, price range, service quality, or other relevant factors.
5. Gastronomic Style: Focuses primarily on the food and beverage offerings of the restaurant.
Prompts to ChatGPT. List 5 different writing styles of Movie Reviews (for IMDB/SST-2).
# Response from ChatGPT.
1. Descriptive: Focuses on providing a detailed and vivid description of the movie and its elements.
2. Analytical: Analyzes and evaluates various aspects of the movie, such as the plot, acting, cinematography, and direction.
3. Persuasive: Aims to convince the reader to watch or avoid the movie by presenting strong arguments and opinions.
4. Comparative: Compares the movie to other films within the same genre or by the same director.
# F.5.5 Prompt Template
See table 20 and 21 for details.
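For the sentiment datasets of Tables 20 and 21, the class label is simply {sentiment}, so a balanced synthetic set can be produced by looping over both polarities while sampling the remaining attributes independently of the class. The loop below is a rough sketch under that reading; query_chatgpt stands for the chat-completion call from the earlier sketch, and the attribute pools are placeholders rather than the actual lists.

import random

STYLES = ["descriptive", "personal narrative", "analytical"]    # placeholder pool
EXPERIENCES = ["a dinner with friends", "a weekday lunch"]       # placeholder pool
DETAILS = ["cozy family-run", "upscale fine-dining"]             # placeholder pool

def yelp_attr_prompt(sentiment):
    # Fill the Yelp AttrPrompt of Table 20 for one sampled attribute combination.
    min_words, max_words = random.choice([(40, 80), (80, 150)])
    return (
        f"Please write a {sentiment} review for a restaurant, following the requirements below: "
        f"1. The overall review should be {sentiment}; "
        f"2. It should be a {random.choice(DETAILS)} restaurant; "
        f"3. Should focus on {random.choice(EXPERIENCES)}; "
        f"4. Should be in length between {min_words} and {max_words} words; "
        f"5. The writing style of the review should be {random.choice(STYLES)};"
    )

def build_balanced_set(n_per_class, query_chatgpt):
    # Generate n_per_class synthetic reviews for each sentiment label.
    dataset = []
    for sentiment in ("positive", "negative"):
        for _ in range(n_per_class):
            dataset.append({"text": query_chatgpt(yelp_attr_prompt(sentiment)),
                            "label": sentiment})
    return dataset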
# G Details for LLM-Generated Attributes
In this section, we present a brief description of LLM-Generated Attributes. For a comprehensive list of attributes, please refer to the GitHub repository: https://github.com/yueyu1030/attrprompt.
# G.1 NYT Attributes
# G.1.1 Subtopics
We randomly select 3 categories in the NYT dataset and display the corresponding subtopic attributes for each category:
astronomy:
– Discoveries of exoplanets – Black holes and their role in shaping galaxies – The search for extraterrestrial life – Gravitational waves and the study of the universe's origins – The use of telescopes to explore the universe – The mysteries of dark matter and dark energy – Solar flares and their impact on Earth – The history of the universe and its evolution over time – Exploring the possibility of space tourism – The exploration of our neighboring planets, such as Mars and Venus.
⢠baseball: | 2306.15895#93 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
– Recent controversy surrounding sign-stealing scandal in MLB – Breakdown of top prospects in minor league baseball – Analysis of new rule changes for upcoming baseball season – Coverage of recent World Series champions and their success – In-depth profile of influential baseball figures, such as managers or players – Updates on retired players and their post-baseball careers – Highlighting standout performances by individual players or teams in recent games – Coverage of international baseball leagues and their top players
â Recent controversy surrounding sign-stealing scandal in MLB â Breakdown of top prospects in minor league baseball â Analysis of new rule changes for upcoming baseball season â Coverage of recent World Series champions and their success â In-depth profile of influential baseball figures, such as managers or players â Updates on retired players and their post-baseball careers â Highlighting standout performances by individual players or teams in recent games â Coverage of international baseball leagues and their top players
economy:
â Job market and employment rates â Interest rates and monetary policy â Inflation and deflation â Economic growth and GDP â Consumer spending and retail sales â Income inequality and poverty â GDP growth and contraction â Labor market trends â Economic impacts of natural disasters and pandemics â Housing market and real estate
# G.2 Amazon Attributes
# G.2.1 Product Brands
We randomly select 3 categories in Amazon dataset and display the corresponding product brand attributes for each category:
camera_photo.: â Canon â Nikon â Leica â Hasselblad â Fujifilm â Lumix (Panasonic) â GoPro â Polaroid â Yashica â Mamiya
health_and_personal_care.:
29
â Johnson & Johnson â Dove â Colgate â Head & Shoulders â Merck â Pfizer â Merck â Abbott Laboratories â GlaxoSmithKline
toys_games.: | 2306.15895#94 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
2306.15895 | 95 | 29
– Johnson & Johnson – Dove – Colgate – Head & Shoulders – Merck – Pfizer – Merck – Abbott Laboratories – GlaxoSmithKline
toys_games.:
– Mattel – Fisher-Price – Hasbro – Lego – Nerf – Barbie – Hot Wheels – Playmobil – MGA Entertainment – Paw Patrol
# G.2.2 Product Names
We randomly select 3 categories in the Amazon dataset and display the corresponding product name attributes for each category:
camera_photo.:
– SnapShooter – FocusReady – ClickCapture – MemoriesMatter – FlashFinder – PicturePerfect – ShotSnap – VisionVibe – PixelPal – FreezeFrame – AngleAttack – SnapStash – FrameFlair – SmileSnaps – ImageImpact – ZoomZest – ClickCraze – PixelPassion – ShootSmart – CaptionChamp.
baby.:
– Baby Swing – Diaper Genie – Milk Warmer – Baby Carrier – Car Seat – Baby Monitor – Baby Food Maker – Nursery Glider – Changing Table – Baby Bouncer – Playpen – Teething Rings – Baby Wipes Dispenser – Baby Bath Tub – Bibs – Baby Blankets – Pacifier Clip – Baby Sling – Baby Napper – Moses Basket
2306.15895 | 96 | outdoor_living.:
– Sunbrella – Weber – Keter – Trex – Solaira – Tropitone – Bimini Solutions – La-Z-Boy Outdoor – Suncast – Beltwood – Quikrete – Cosco Outdoor Living – Anova Furnishings – Duramax – US Polymers – Ostrich Chairs – Carefree of Colorado – Tuff Coat – Fire Sense – Heritage Patios.
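As a minimal sketch of how attribute values like those above can be turned into generation prompts, the snippet below samples one value per attribute dimension for a chosen Amazon class and fills a simple template. The brand values are copied from the outdoor_living list; the length and style pools, the dimension names, and the template wording are illustrative assumptions rather than the exact prompts used by AttrPrompt.

```python
import random

# Illustrative attribute pools for one Amazon class. Brand values come from the
# outdoor_living list above; the other dimensions are assumed for illustration.
ATTRIBUTES = {
    "brand": ["Sunbrella", "Weber", "Keter", "Trex", "Suncast"],
    "length": ["2-3 sentences", "one paragraph", "two paragraphs"],
    "style": ["enthusiastic", "critical", "matter-of-fact"],
}

def build_attributed_prompt(class_name: str, rng: random.Random) -> str:
    """Sample one value per attribute dimension and fill a class-conditional template."""
    picked = {dim: rng.choice(values) for dim, values in ATTRIBUTES.items()}
    return (
        f"Write a product review of {picked['length']} in the '{class_name}' category "
        f"about a product from {picked['brand']}, written in a {picked['style']} tone."
    )

rng = random.Random(0)
print(build_attributed_prompt("outdoor_living", rng))
```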
# G.3 Reddit Attributes
# G.3.1 Resources
We randomly select 3 categories in the Reddit dataset and display the corresponding resource attributes for each category:
⢠buddy_crossing.:
– Meetup.com - a website that allows you to find and join groups of individuals with similar interests in your local area, including hiking, book clubs, and social events.
– The Buddy System: Understanding Mental Illness and Addiction - a book that explores the biology of addiction and provides a guide for friends and family members of individuals struggling with these issues.
– Lynda.com - a subscription-based online learning platform that provides courses on a variety of subjects including computer programming, business, web design, and more.
– Codecademy.com - an interactive online platform that teaches coding skills for free or through a subscription.
– Khan Academy - a nonprofit organization that provides free online courses in a wide range of subjects including math, science, and humanities to learners of all ages.
– Duolingo - a language-learning app that is available for free on the App Store and Google Play, offering courses in a variety of languages including Spanish, French, and German.
– MindBody App - a mobile app that helps users find and book local fitness, wellness, and beauty activities such as yoga classes, massages, and haircuts.
– Headspace - a meditation app that offers guided meditation courses to help users reduce stress and improve focus.
– The Knot - a website that provides tools and resources for wedding planning, including a Wedding Website Builder, guest list tracker, and registry management.
– Khan Academy - a nonprofit organization that provides free online courses in a wide range of subjects including math, science, and humanities to learners of all ages.
– Others resource for buddy_crossing.
whats_that_book.:
– Goodreads - A social platform for book lovers where users can search for books, create bookshelves, and write reviews.
– LibraryThing - A community-driven cataloging website where users can create and share their personal book collections.
– AbeBooks - An online marketplace for rare and out-of-print books, as well as other antique or collectible items.
– Shelfari - An online book club where users can share book recommendations and read reviews from others.
– Project Gutenberg - A digital library of freely available public domain books.
– Paperback Swap - A book trading community where users can exchange books with others across the US.
– Goodreads Librarians Group - A community of Goodreads users who help with book cataloging, including identifying books from incomplete information.
– Book Riot - A website featuring book reviews and book-related news, with an emphasis on diverse and underrepresented voices.
– The New York Times Book Review - A renowned weekly publication featuring book reviews, author interviews, and literary criticism.
– Others resource for whats_that_book.
pokemongo_friends.:
– Pokemon GO Hub: A comprehensive website dedicated to news, guides, and analysis on Pokemon GO.
– The Silph Road Subreddit: A community-run subreddit dedicated to research and analysis of Pokemon GO mechanics.
– Poke Assistant: A website that offers a range of tools to help you optimize your Pokemon GO experience, including IV calculators and gym battle simulations.
– The Trainer Club: A YouTube channel that provides daily updates, news, and tips for Pokemon GO trainers.
– Gotta Catch 'Em All: A Facebook group where you can connect with other Pokemon GO players and coordinate raid battles and other activities.
– Reddit's r/PokemonGOFriends Subreddit: A community of players looking for friends to exchange gifts and share invites for raids.
– The PokeMap: A website that allows you to find nearby Pokemon on a map in real-time.
– Poke Genie: An app that automatically calculates IVs and other stats for your Pokemon, saving you time and headaches.
– Pokemon GO Gamepress: A website that offers detailed breakdowns and analysis of Pokemon, movesets, and other game mechanics.
– The Go Ranger App: An app that helps you plan your raids and battles, with intuitive mapper tools and filters to help you find the Pokemon you're looking for.
– Others resource for pokemongo_friends.
# G.3.2 Experience
We randomly select 3 categories in the Reddit dataset and display the corresponding experience attributes for each category:
⢠build_a_pc.:
– DIY PC Builds: Sharing personal experiences and success stories of building custom PCs, discussing component choices, troubleshooting, and performance optimizations.
– Budget-Friendly Builds: Discussing experiences with building PCs on a tight budget,
sharing cost-saving tips, and recommendations for budget-friendly components.
– Cable Management: Sharing personal experiences and tips for effective cable management in PC builds, discussing cable routing techniques and showcasing clean build aesthetics.
– RGB Lighting: Discussing experiences with RGB lighting setups in PC builds, sharing recommendations for RGB components, software customization, and lighting effects.
– Troubleshooting Builds: Sharing experiences and tips for troubleshooting common issues in PC builds, helping fellow builders diagnose and solve hardware or software problems.
– Silent and Quiet PC Builds: Discussing experiences and recommendations for building silent or quiet PCs, focusing on noise reduction techniques and quiet component choices.
– Workstation Builds: Sharing experiences and insights into building PCs for professional workloads, such as video editing, 3D rendering, programming, and graphic design.
– Water-Cooling Adventures: Sharing experiences and insights into custom water-cooling loops, discussing the challenges, benefits, and performance improvements achieved.
– Unique and Custom Builds: Showcasing and discussing unique and custom PC builds, including themed builds, custom cases, or exotic cooling solutions.
– Build Planning and Component Selection: Discussing experiences with planning PC builds, researching and selecting components, considering compatibility, and balancing performance and budget.
– Modding and Case Customization: Sharing experiences with PC case modding and customization, discussing techniques, materials, and showcasing personal projects.
– Compact and Small Form Factor Builds: Discussing experiences with building compact or small form factor PCs, sharing recommendations for mini-ITX cases, cooling solutions, and component choices.
– Home Server and NAS Builds: Sharing experiences and insights into building home servers and network-attached storage (NAS) systems, discussing storage options, software, and data management.
– Multimonitor Setups: Discussing experiences with multimonitor setups, sharing tips for optimizing productivity and gaming experiences across multiple displays.
– PC Gaming Peripherals: Sharing experiences and recommendations for gaming peripherals, such as keyboards, mice, monitors, and headsets, discussing features and personal preferences.
xbox.:
– Xbox One exclusive games such as Halo 5, Forza Horizon 4, and Gears of War 4
– Xbox One media and entertainment apps such as Netflix and Hulu
– memorable gaming moments or achievements on the Xbox console.
– Purchase Xbox One online.
– Xbox Kinect motion sensor accessory
– Xbox Play Anywhere program
– Other Experience of Xbox
metal_gear_solid.:
– Tactical Weapon Customization: Experimenting with various weapons, attachments, and equipment to tailor loadouts to different mission objectives and playstyles.
– Character Development: Witnessing the growth and development of iconic characters such as Solid Snake, Big Boss, or Raiden throughout their respective story arcs.
– Stealthy Takedowns: Executing silent and non-lethal takedowns, utilizing tranquilizer darts, chokeholds, or sneaking up on enemies from behind.
– Tactical Planning: Strategizing mission approaches, analyzing enemy patrols, setting traps, and utilizing distractions to gain the upper hand.
– Memorable Characters: Developing connections with unique and memorable characters, such as Otacon, Meryl, Revolver Ocelot, or The Boss.
– Stealthy Gadgets: Utilizing gadgets and tools, such as the cardboard box, thermal goggles, or the iconic Solid Eye, to gain advantages during missions.
– Emotional Storytelling: Experiencing emotionally impactful moments within the narrative, exploring themes of loss, betrayal, loyalty, and the human cost of warfare.
– Espionage Tactics: Participating in undercover missions, gathering intelligence, infiltrating enemy bases, and sabotaging their operations.
– Lore and Mythology: Delving into the intricate lore, conspiracies, and historical events within the Metal Gear Solid universe, including topics like The Patriots or the Philosopher's Legacy.
– Groundbreaking Game Design: Appreciating the innovative gameplay mechanics, cinematic presentation, and attention to detail that have made the Metal Gear Solid series a beloved and influential franchise in the gaming industry.
– Others experience for metal_gear_solid.
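Because each class pairs several attribute dimensions (for instance the resources and experiences listed above, plus dimensions such as length or style), the number of distinct prompt configurations grows multiplicatively. The short sketch below counts those combinations for hypothetical pool sizes; the dimension names and counts are illustrative only, not figures reported in the paper.

```python
from math import prod

# Hypothetical pool sizes per attribute dimension for a single Reddit class.
pool_sizes = {
    "resource": 10,    # e.g., the resource attributes listed for one subreddit
    "experience": 10,  # e.g., the experience attributes listed for one subreddit
    "length": 3,
    "style": 4,
}

n_configs = prod(pool_sizes.values())
print(f"distinct attribute configurations for this class: {n_configs}")  # 10 * 10 * 3 * 4 = 1200
```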
# G.4 StackExchange Attributes
# G.4.1 Scenarios
We randomly select 3 categories in the StackExchange dataset and display the corresponding scenario attributes for each category:
⢠multiplayer.:
– Cheating/hacking in online games
– Inappropriate player behavior
– Unbalanced game mechanics
– Difficulty connecting to multiplayer servers
– Matchmaking errors
– Unresponsive or laggy gameplay
– Glitches in gameplay affecting online matches
– Difficulty finding players to match with
– Balancing player skills in matchmaking
– Disconnects and dropped connections mid-game
– Cross-platform compatibility issues
– In-game communication problems
– Difficulty managing and moderating game servers
– Addressing griefing and trolling in multiplayer games
– Managing player accounts and login systems
– Implementing or improving anti-cheat measures
– Community feedback and feature requests
– Addressing game-breaking exploits
– Ensuring fair and accurate reporting of player statistics
– Addressing server crashes and downtime.
procedural-generation.:
– Improving the efficiency of procedural generation algorithms in Python.
– Troubleshooting issues with memory usage in large-scale procedural generation projects.
– Debugging issues with randomized content generation in procedural levels.
– Implementing procedural generation techniques in C++ for game development.
– Exploring the potential of machine learning in procedural content generation.
– Optimizing the generation of complex 3D models using procedural algorithms.
– Managing complexity and maintaining consistency in procedurally generated game worlds.
– Addressing issues with procedural generation of text-based content, such as dialogue or item descriptions.
– Developing tools to aid in the creation and testing of procedural generation algorithms.
– Balancing the need for randomness with player expectations for fairness and balance.
– Addressing issues with the procedural generation of music and sound effects.
– Improving the visual quality of procedurally generated game assets.
– Exploring ethical concerns around the use of AI in procedural content generation.
– Developing procedural generation techniques for non-linear narratives or branching storylines.
– Improving the procedural generation of inhabited environments, such as procedurally generated NPCs.
– Addressing issues with the procedural generation of terrain features such as rivers and mountains.
– Implementing procedural generation techniques for user-generated content.
– Supporting multithreaded execution in procedural generation algorithms.
– Ensuring procedural generation techniques are compatible with various game engines
and frameworks.
– Improving the scalability of procedural generation algorithms for use in multiplayer games.
• networking.:
– Difficulty in troubleshooting network connection issues on a Linux system
– Configuring a wireless access point for a large office space
– Implementing load balancing across multiple servers in a network
– Optimizing network throughput to reduce latency in a gaming environment
– Implementing firewall rules to block unauthorized access to a network
– Troubleshooting DNS resolution issues on a Windows server
– Designing and implementing a secure VPN connection
– Setting up a network file server for shared access among multiple clients
– Configuring SNMP to monitor network traffic and utilization
– Designing a network topology for a large enterprise with multiple locations
– Troubleshooting issues with Ethernet switches in a data center environment
– Implementing QoS to prioritize network traffic for critical applications
– Configuring NAT and PAT to enable internet access for multiple devices
– Setting up and configuring VLANs to segment a network
– Troubleshooting issues with network printers in an office environment
– Configuring routing protocols in a large network
– Securing wireless access points to prevent unauthorized access
– Troubleshooting issues with VPN connection stability and speed
– Implementing network virtualization with virtual LANs and virtual switches
– Designing and implementing an effective network security strategy to prevent data breaches.
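To turn scenario attributes like these into training data, each class can be paired with a randomly drawn scenario and sent to the LLM as a generation prompt. The loop below is a hedged sketch of that pipeline under stated assumptions: the prompt template is illustrative, and `query_llm` is a stand-in for whatever chat-completion client is actually used, not a real API.

```python
import random

# Scenario pools for two StackExchange classes, taken from the lists above;
# the prompt template and the query_llm stub are illustrative assumptions.
SCENARIOS = {
    "networking": [
        "Configuring a wireless access point for a large office space",
        "Troubleshooting DNS resolution issues on a Windows server",
    ],
    "procedural-generation": [
        "Improving the efficiency of procedural generation algorithms in Python.",
        "Debugging issues with randomized content generation in procedural levels.",
    ],
}

def query_llm(prompt: str) -> str:
    """Stand-in for a chat-completion API call; replace with a real client."""
    raise NotImplementedError("plug in an actual LLM client here")

def generate_examples(n_per_class: int, seed: int = 0):
    """Pair each class with randomly drawn scenario attributes and collect generations."""
    rng = random.Random(seed)
    examples = []
    for label, scenarios in SCENARIOS.items():
        for _ in range(n_per_class):
            scenario = rng.choice(scenarios)
            prompt = (
                f"Write a StackExchange-style question tagged '{label}' "
                f"about the following scenario: {scenario}"
            )
            examples.append((label, query_llm(prompt)))
    return examples
```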