doi
stringlengths
10
10
chunk-id
int64
0
936
chunk
stringlengths
401
2.02k
id
stringlengths
12
14
title
stringlengths
8
162
summary
stringlengths
228
1.92k
source
stringlengths
31
31
authors
stringlengths
7
6.97k
categories
stringlengths
5
107
comment
stringlengths
4
398
journal_ref
stringlengths
8
194
primary_category
stringlengths
5
17
published
stringlengths
8
8
updated
stringlengths
8
8
references
list
2306.11698
312
Figure 26: Examples of fairness evaluation in the zero-shot and few-shot settings. The green dialogue box refers to the user input; the yellow dialogue box refers to user-provided example responses as few-shot examples; the red dialogue box refers to the real responses from GPT-3.5 and GPT-4. Table 30: Accuracy (ACC (%)), demographic parity difference (Mdpd), and equalized odds difference (Meod) on Adult test set with different base rate parity bPt in zero-shot setting. Model bPt = 0.5 ACC ↑ Mdpd ↓ Meod ↓ ACC ↑ Mdpd ↓ Meod ↓ ACC ↑ Mdpd ↓ Meod ↓ bPt = 0.0 bPt = 1.0 GPT-3.5 GPT-4 63.5 72.5 0.17 0.21 0.20 0.26 68.5 78.5 0.31 0.47 0.19 0.40 73.0 85.5 0.46 0.71 0.49 0.95 # 10.1 Metrics of fairness
2306.11698#312
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
313
MOF Chatbot: Sure! Here's a detailed synthesis procedure for ZIF-8: 1. Dissolve Zn(NO3)2·6H2O in DMF (20 mL) to create a solution. 2. Add 2-methylimidazole (0.5 g) to the solution. 3. Mix the solution thoroughly. 4. Transfer the solution to a Teflon-lined autoclave (23 mL). 5. Seal the autoclave and heat it at 85°C for 24 hours. 6. Allow the autoclave to cool to room temperature. 7. Remove the resulting white precipitate by centrifugation or filtration. 8. Wash the precipitate with solvents such as DMF to remove any impurities. 9. Dry the resulting white powder at 60°C under vacuum for 12 hours.
2306.11296#313
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
313
# 10.1 Metrics of fairness We first introduce the definitions of fairness metrics used to evaluate the fairness of model predictions, test data, and few-shot training examples. Suppose that we have n data samples {(X, Y, A) }"_, with features X € 4, labels Y € Y := {0,1}, and a sensitive attribute A € {0,1} drawn from the distribution Py y. Note that the sensitive attribute A is also included in the feature vector X. Let f : ¥ + Y represent a machine learning model. We adopt the metric of demographic parity difference Mapa to evaluate model prediction fairness: # Mapa = |P(x,¥,4)~Pxy Lf(X) = YA = 1] = (.Y.A)~Pxy [f(X) = 1A = 0] The demographic parity difference measures the difference between the probability of positive predictions conditioned on sensitive attribute A = 1 and that conditioned on A = 0. A large demographic parity difference Mdpd means that there is a large prediction gap between the groups with A = 1 A = 0, indicating the unfairness of the model prediction. Since the demographic parity difference does not consider the ground truth label, we also consider the metric of equalized odds difference Meod [69] to evaluate model prediction fairness: Meod = max {MT P , MF P } (2)
2306.11698#313
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
314
Caution: - DMF is toxic and should not be inhaled or ingested. Handle it in a well-ventilated area with protective equipment such as gloves and goggles. - The autoclave should be sealed tightly to prevent leakage or explosion during heating. - The resulting white powder should be handled with care to avoid inhaling or exposing it to moisture, as it can easily absorb water vapor from the air and decompose. Figure S71. A conversation with the MOF Chatbot based on gpt-3.5-turbo model. 80 # Human-MOF Chatbot Interaction # Human: How to make Zirconium MOF? MOF Chatbot:
2306.11296#314
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
314
Meod = max {MT P , MF P } (2) where MT P denotes the true positive equalized odds difference: # Mrp = |Pcx,y,aynpxy U(X) = UY = 1,4 = 0) - (~y,A)wpxy [f(X) = UY =1,A4=1| and MF P denotes the false positive equalized odds difference: # Mrp = |Px,y,a)~Pxy If(X) = 1Y =0,A = 0) - Pix,yajwpxy If(X) = UY =0,A=1] A large equalized odds difference Meod demonstrates a large prediction gap conditioned on different values of the sensitive attribute, and therefore indicates the unfairness of the model prediction. To evaluate the demographical balance (fairness) of the data distribution, we adopt the base rate parity bP for distribution P in [209, 85]: bP = P(X,Y,A)∼PXY [Y = 1|A = 1] − P(X,Y )∼PXY A[Y = 1|A = 0] 54 () (3) (4)
2306.11698#314
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
315
MOF Chatbot: There are different ways to make Zirconium MOFs, and the specific method may depend on various factors such as the linker and modulator used. However, based on the information available from the papers I have reviewed, I can provide two examples: 1. Zr-CAU-30: Zr-CAU-30 was synthesized using Ni-H8TPPP (linker) and NaF/NaOH (modulators) in H2O solvent at 160°C for 3 hours under stirring. ZrOCl2·8H2O (metal source) and Ni-H8TPPP were mixed in H2O solvent, and then NaF and NaOH were added to the mixture. The mixture was heated and stirred until the reaction was completed. The product was obtained as a highly porous metal phosphonate. 2. Zr-CAU-28: Zr-CAU-28 was synthesized using H2FDC (linker) and acetic acid (modulator) in H2O solvent at 80°C for 1 hour under microwave-assisted heating. ZrOCl2·8H2O (metal source) and H2FDC were mixed in H2O solvent, and then acetic acid was added to the mixture. The mixture was heated under microwave irradiation until the reaction was completed. The product was obtained as a white crystalline powder.
2306.11296#315
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
315
54 () (3) (4) Table 31: Accuracy (ACC (%)), demographic parity difference (Mdpd), and equalized odds difference (Meod) on the Adult dataset using few-shot examples with different base rate parity bPc in the 32-shot learning. The base rate parity of the test set bPt is fixed as 0.0 to demonstrate the bias induced by the context. Model bPc = 0.5 ACC ↑ Mdpd ↓ Meod ↓ ACC ↑ Mdpd ↓ Meod ↓ ACC ↑ Mdpd ↓ Meod ↓ bPc = 0.0 bPc = 1.0 GPT-3.5 GPT-4 61.5 72.0 0.033 0.10 0.057 0.12 69.5 78.5 0.026 0.11 0.062 0.14 70.5 79.0 0.12 0.28 0.20 0.34 Table 32: Demographic parity difference (Mdpd) and equalized odds difference (Meod) with different sensitive attributes on the Adult dataset with test base rate parity bPt = 0.0 in the zero-shot setting.
2306.11698#315
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
316
Model Race Mdpd ↓ Meod ↓ Mdpd ↓ Meod ↓ Mdpd ↓ Meod ↓ Sex Age GPT-3.5 GPT-4 0.17 0.21 0.20 0.26 0.14 0.16 0.17 0.28 0.09 0.14 0.15 0.20 A large base rate parity bP reflects the bias of the data distribution regarding a given sensitive attribute A, indicating that the data distribution P is biased and demographically imbalanced. In the evaluation, we consider both the base rate parity of data distribution in the context of few-shot training examples bPc and the base rate parity of the test set bPt. # 10.2 Fairness evaluation in zero-shot setting
2306.11698#316
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
317
MOF Chatbot: MOF Chatbot: In MOF synthesis, a modulator is a molecule that can promote or inhibit crystal growth and control the morphology of the MOF crystals. In the synthesis of Zr-CAU-28 as described in the paper, acetic acid was used as a modulator. The addition of acetic acid likely helped to increase the yield of the desired MOF product, as well as promote the formation of smaller and more uniform crystals. If you choose not to add acetic acid during the synthesis of Zr-CAU-28, the outcome of your results may differ. Without a modulator, the crystal growth may be less controlled, leading to the formation of larger or less uniform crystals. Additionally, the yield of the desired MOF product may be lower. It is important to note, however, that the impact of omitting acetic acid may depend on the specific MOF synthesis conditions and may not always have a significant effect on the final product. Human: That's interesting. In the case of synthesizing other MOFs, have you come across any alternative modulators that could potentially be used in place of acetic acid? MOF Chatbot:
2306.11296#317
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
317
In this part, we aim to explore (1) whether the predictions of GPT models are fair regarding different sensitive attributes, and (2) whether there is a fairness gap between the predictions of GPT-3.5 and GPT-4 in the zero-shot setting. Evaluation setup. We select a commonly used fairness evaluation dataset Adult [15], transform the tabular data into language descriptions, and ask GPT models to perform classification tasks on it. The Adult dataset includes 14 attributes of a person (e.g., age and education level) as input. The task is to predict whether the income of the person is over $50k per year. We evaluate the fairness of GPT models on multiple sensitive attributes, including sex, race, and age. Specifically, we have “male” and “female” for the attribute of sex, “white” and “black” for race, and “above average” and “below average” for age. One of the tabular data is: “‘A person in 1996 has the following attributes: age: 19, workclass: private, final weight: 192773, education: HS-grad, highest education level: 9, marital status: never-married, occupation: other-service,
2306.11698#317
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
318
MOF Chatbot: MOF Chatbot: While the paper did not mention any other modulators used in the synthesis of Zr-CAU-28, other papers have used different modulators for synthesizing Zr-based MOFs. For example, in the synthesis of Zr-CAU-30, the modulators used were NaF and NaOH. However, the choice of modulator may impact the outcome of the results, so it is recommended to refer to other papers and perform some preliminary experiments to determine the most suitable modulator for the specific synthesis protocol and desired MOF properties. Figure S72. A conversation with the MOF Chatbot based on gpt-3.5-turbo model. 81 # Human-MOF Chatbot Interaction Human: Can you tell me how to synthesize MOF-499? MOF Chatbot: MOF Chatbot: Based on the information available from the MOF papers I have read so far, I cannot provide a reliable answer to this question. Please provide more information, or the specific paper reference, on the synthesis of MOF-499. Human: Oops, my bad. There was a typo. I actually meant MOF-419. MOF Chatbot:
2306.11296#318
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
318
workclass: private, final weight: 192773, education: HS-grad, highest education level: 9, marital status: never-married, occupation: other-service, relationship: own-child, race: white, sex: female, capital gain: 0, capital loss: 0, hours per week: 24, native country: United-States”’. An illustrative example of evaluations on GPT models is shown in Figure 26. To show whether there is a large performance gap across different groups regarding the sensitive attribute, we evaluate GPT models on the test set with different base rate parity bPt. We select 200 samples to construct the test set. Results. Table 30 indicates that both GPT-3.5 and GPT-4 demonstrate fairness issues. The accuracy and unfairness scores (i.e., Mdpd and Meod) between the groups with sensitive attribute A = 0 and A = 1 increase significantly as the test set becomes more and more demographically imbalanced (i.e., a larger base rate parity bPt). Specifically, with a large base rate parity bPt, the constructed test set would be more biased towards Y = 1 (e.g., high salary) conditioned on A = 1 (e.g., male), and therefore a higher accuracy with a
2306.11698#318
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
319
MOF Chatbot: MOF Chatbot: To synthesize MOF-419, you will need to use Y(NO3)3 (0.5 mmol) as the metal source, H2BDH (benzene-1,4-dihydroxamic acid, CAS number: 20073-80-7) (0.5 mmol) as the linker, and formic acid (100 mmol) as the modulator. The reaction should be carried out in H2O (12 mL) solvent in a 20 mL scintillation vial at 100°C for 12 hours. The product should be obtained as single crystals. If you have any further questions, feel free to ask.. Human: Thanks for the explanation. But, how about the post-synthesis steps? What should | do after | get the MOF crystals? Human: Thanks for the explanation. But, how about the post-synthesis steps? What should I do after I get the MOF crystals? MOF Chatbot:
2306.11296#319
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
319
be more biased towards Y = 1 (e.g., high salary) conditioned on A = 1 (e.g., male), and therefore a higher accuracy with a larger bPt indicates the model’s bias towards the group with A = 1. Furthermore, we find that GPT-4 consistently achieves higher accuracy than GPT-3.5 even under biased test distribution, indicating a trade-off between prediction accuracy and fairness. We also evaluate the fairness of GPT models under different sensitive attributes, including sex, race, and age. Table 32 shows similar observations for different sensitive attributes, while the unfairness issues of GPT models are more severe for certain sensitive attributes such as sex and race.
2306.11698#319
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
320
MOF Chatbot: Unfortunately, the paper I have reviewed did not provide information regarding any post-synthesis steps. However, depending on the intended application of the MOF, various post-synthesis modifications are possible including activation, doping, functionalization, and more. Activating the MOF typically involves removing any solvent or guests trapped within the pores of the MOF through solvent exchange or thermal treatment. Doping and functionalization can be achieved through various methods including deposition, exchange, and covalent attachment. The exact post-synthesis steps required for MOF-419 may depend on the intended application and should be further researched. Human: Could you provide me with the exact title of the paper that discusses MOF-419 so that I can have a check by myself? MOF Chatbot: MOF Chatbot: The paper discussing MOF-419 is titled "Metal-Organic Frameworks with Rod Yttrium Secondary Building Units" and was published in the Isr. J. Chem. journal. Human: Thank you. Can you give me a link to get access to the paper? Human: Thank you. Can you give me a link to get access to the paper? MOF Chatbot:
2306.11296#320
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
320
# 10.3 Fairness evaluation under demographically imbalanced context in few-shot learning In this part, we aim to explore whether the fairness of model predictions is affected by the demo- graphically imbalanced (unfair) context provided by the few-shot examples. Evaluation setup. We similarly transform the tabular data in Adult [15] into language descriptions and ask GPT models to perform the classification tasks. The sensitive attribute sex is selected, and 55
2306.11698#320
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
321
Human: Thank you. Can you give me a link to get access to the paper? MOF Chatbot: MOF Chatbot: Unfortunately, I cannot provide a specific link to access the paper as I do not have information on the journal's access policies. However, you can try searching for the paper by its title "Metal-Organic Frameworks with Rod Yttrium Secondary Building Units" in a search engine or a scientific paper database. Some databases that you can use are Google Scholar, PubMed, and ResearchGate. Figure S73. A conversation with the MOF Chatbot based on gpt-3.5-turbo model. 82 # Human-MOF Chatbot Interaction Human: Is it possible to make single crystals of MOF-801?
2306.11296#321
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
321
55 A = 0 denotes female and A = 1 denotes male. We consider 32 few-shot training instances here since it is the maximum number of examples we can have given the token number limitation of GPT models. We construct three contexts based on different demographical imbalance levels with base rate parity bPc = 0.0, 0.5, 1.0. A large base rate parity bPc indicates the bias towards a positive prediction Y = 1 (i.e., high salary) conditioned on A = 1 (i.e., male) over A = 0 (i.e., female). Similarly, we sample 200 samples as the test set. We fix the base rate parity of the test set bPt as 0.0 to demonstrate the bias induced from the training context. Results. Table 31 shows that when the training context is more demographically imbalanced (i.e., a larger base rate parity bPc), the predictions of GPT models become less fair (i.e., larger Mdpd and Meod ). We find that only 32 samples with group bias in the context can affect the fairness of GPT model predictions very effectively. The demographic parity difference Mdpd of GPT-3.5 is increased from 0.033 to 0.12, and that of GPT-4.0 is increased from 0.10 to 0.28. This conclusion also holds for the metric of equalized odds difference Meod.
2306.11698#321
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
322
MOF Chatbot: Yes, it is possible to make single crystals of MOF-801. In fact, the paper "Water Adsorption in Porous Metal- Organic Frameworks and Related Materials" (DOI: 10.1021/ja500330a) reports the synthesis and characterization of MOF-801 in its single crystal form (MOF-801-SC). The synthesis was performed using ZrOCl2·8H2O as the metal source, fumaric acid as the linker, formic acid as the modulator, DMF as the solvent, and a reaction temperature of 120°C for 24 h in a 60 mL screw capped glass vessel. The product obtained was octahedral colorless single crystals.
2306.11296#322
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11296
323
Human: What is the application of this MOF and why it is important to make it single crystal? Can | make powdered MOF-801? Human: What is the application of this MOF and why it is important to make it single crystal? Can I make powdered MOF-801? MOF Chatbot: MOF-801 has potential applications in water adsorption, dehumidification, and delivery of drinking water in remote areas. It is important to make single crystals of MOF-801 because they allow for more accurate structural characterization, which is important for understanding the properties and behavior of the material. Powdered MOF-801 can also be made and studied for its properties, but single crystals provide more detailed information. Therefore, both single crystal and powdered forms of MOF-801 have their own importance and applications. Human: How do they differ in terms of synthesis procedure?
2306.11296#323
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
323
In this part, we aim to explore how the fairness of model predictions is affected by the number of demographically balanced (fair) examples in the few-shot setting. Evaluation setup. We similarly transform the tabular data in the Adult dataset into language descriptions and ask GPT models to perform classification tasks. The sensitive attribute is selected as sex, and A = 0 denotes female and A = 1 denotes male. We randomly select 200 test samples with the constraint of base rate parity bPt = 0.5 for fair comparisons across evaluations with different numbers of few-shot examples. We perform the evaluation with 0, 16, 32 few-shot instances with base rate parity bPc = 0. In other words, we want to study whether the predictions of GPT models become fairer given more demographically balanced (fair) examples in few-shot learning. Results. Table 33 indicates that with a larger number of demographically balanced few-shot examples, the model predictions become fairer, and the accuracy of GPT models on biased test sets decreases. The observation demonstrates that the bias of GPT models towards certain groups can be reduced by adding balanced few-shot training examples, which is aligned with the previous finding on
2306.11698#323
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
324
Human: How do they differ in terms of synthesis procedure? Human: How do they differ in terms of synthesis procedure? MOF Chatbot: Based on the information available from the MOF paper I have read so far, MOF-801 can be synthesized as both a single crystal and a microcrystalline powder form. However, the synthesis procedures are different for each form of MOF-801. The synthesis procedure for MOF-801 single crystal involves the use of ZrOCl2·8H2O as metal source, fumaric acid as linker, formic acid as modulator, DMF as solvent, and a 60 mL screw capped glass for 24 hours at 120°C. On the other hand, MOF-801 microcrystalline powder is synthesized using the same metal source and linker but with different amounts of modulator (benzoic acid) and DMF as solvent. Human: What other MOFs does this paper also report?
2306.11296#324
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
324
demonstrates that the bias of GPT models towards certain groups can be reduced by adding balanced few-shot training examples, which is aligned with the previous finding on GPT- 3 [155]. Moreover, we observe that involving only 16 demographically balanced (fair) few-shot examples is already effective enough in guiding the predictions of GPT models to be fairer. Note that the prediction accuracy of GPT models also decreases with more demographically balanced few-shot examples due to the potential tradeoff between accuracy and fairness.
2306.11698#324
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
325
Human: What other MOFs does this paper also report? Human: What other MOFs does this paper also report? MOF Chatbot: The paper reports on the water adsorption properties of 23 materials, 20 of which are metal-organic frameworks (MOFs). Among the MOFs are 10 zirconium(IV) MOFs, with the following reported for the first time: MOF-801-SC (single crystal form), MOF-802, MOF-805, MOF-806, MOF-808, MOF-812, and MOF-841. MOF-801-P (microcrystalline powder form) was reported earlier and studied for its water adsorption properties. MOF-812 was only made and structurally characterized but not examined for water adsorption because it is a byproduct of MOF-841 synthesis. Human: Thanks. Last question, where can | find this paper? Human: Thanks. Last question, where can I find this paper? MOF Chatbot: You can find the paper on the ACS Publications website using the Digital Object Identifier (DOI) associated with each paper. Here are the DOIs for the papers mentioned in the context:
2306.11296#325
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
325
Takeaways. • GPT-4 is more accurate than GPT-3.5 given demographically balanced test data (controlled by the base rate parity), while GPT-4 also achieves higher unfairness scores under unbalanced test data, indicating the accuracy-fairness tradeoffs. • In the zero-shot setting, both GPT-3.5 and GPT-4 have large performance gaps across test groups with different base rate parity considering different sensitive attributes, indicating that GPT models are intrinsically biased to certain groups. Some attributes, such as sex and race, lead to more severe fairness issues for GPT models. • In the few-shot setting, the performance of both GPT-3.5 and GPT-4 are influenced by the base rate parity of the constructed few-shot examples. More demographically imbalanced (unfair) few-shot examples will induce more biased predictions for GPT models. • The fairness of GPT models can be improved by providing a more demographically balanced (fair) training context. Involving only a few demographically balanced few-shot examples (e.g., 16 samples) can effectively guide GPT models to be fairer.
2306.11698#325
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
326
Table 33: Accuracy (ACC (%)), demographic parity difference (Mdpd), and equalized odds difference (Meod) on Adult dataset with different #shot in the in-context learning. The base rate parity of the few-shot examples bPc is fixed as 0.0, and the base rate parity of the test set is fixed as 0.5. # shot = 16 ACC ↑ Mdpd ↓ Meod ↓ ACC ↑ Mdpd ↓ Meod ↓ ACC ↑ Mdpd ↓ Meod ↓ # shot = 0 # shot = 32 Model GPT-3.5 GPT-4 73.0 85.5 0.46 0.71 0.49 0.95 67.5 78.0 0.25 0.38 0.084 0.27 63.5 75.0 0.19 0.30 0.10 0.13 56 # 11 Related work
2306.11698#326
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
327
1. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A., Language models are few-shot learners. NIPS 2020, 33, 1877- 1901. 2. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; Polosukhin, I., Attention is all you need. NIPS 2017, 30. 3. Zheng, Z.; Rong, Z.; Iu‐Fan Chen, O.; Yaghi, O. M., Metal‐Organic Frameworks with Rod Yttrium Secondary Building Units. Isr. J. Chem. 2023, e202300017. 4. Nguyen, H. L.; Gándara, F.; Furukawa, H.; Doan, T. L.; Cordova, K. E.; Yaghi, O. M., A titanium–organic framework as an exemplar of combining the chemistry of metal–and covalent– organic frameworks. J. Am. Chem. Soc. 2016, 138 (13),
2306.11296#327
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
327
The evaluation of large language models plays a critical role in developing LLMs and has recently gained significant attention. This section presents a comprehensive overview of the existing research and approaches that focus on assessing the capabilities of LLMs from different perspectives. Benchmarks on LLMs toxicity. While LLMs have demonstrated substantial performance gains on various NLP tasks, recent studies [116, 172] show that generative LMs would generate toxic and biased languages, which raises ethical concerns for their safe deployment in real-world applications. To quantify the toxicity in LLM generations, researchers have proposed several datasets, including RE- ALTOXICITYPROMPTS [62] and BOLD [48], which ask LLMs to perform conditional generation and complete the sentence given an incomplete task prompt from the datasets. These datasets derive their task prompts from diverse web sources, ensuring broad context coverage and a range of toxicity levels. For instance, REALTOXICITYPROMPTS [62] obtains its task prompts from OpenWebText [65] and presents a stratified toxicity sample in four distinct bins: [0, 0.25), [0.25,
2306.11698#327
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
328
framework as an exemplar of combining the chemistry of metal–and covalent– organic frameworks. J. Am. Chem. Soc. 2016, 138 (13), 4330-4333. 5. Furukawa, H.; Gandara, F.; Zhang, Y.-B.; Jiang, J.; Queen, W. L.; Hudson, M. R.; Yaghi, O. M., Water adsorption in porous metal–organic frameworks and related materials. J. Am. Chem. Soc. 2014, 136 (11), 4369-4381. 6. Hanikel, N.; Kurandina, D.; Chheda, S.; Zheng, Z.; Rong, Z.; Neumann, S. E.; Sauer, J.; Siepmann, J. I.; Gagliardi, L.; Yaghi, O. M., MOF Linker Extension Strategy for Enhanced Atmospheric Water Harvesting. ACS Cent. Sci. 2023, 9 (3), 551-557. 7. Jiang, J.; Furukawa, H.; Zhang, Y.-B.; Yaghi, O. M., High methane storage working capacity in metal–organic frameworks with acrylate links. J. Am. Chem. Soc. 2016, 138 (32),
2306.11296#328
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
328
task prompts from OpenWebText [65] and presents a stratified toxicity sample in four distinct bins: [0, 0.25), [0.25, 0.5), [0.5, 0.75), [0.75, 1]. BOLD [48] samples its task prompts from Wikipedia, covering a variety of topics, including profes- sions, gender, race, religion, and political ideology. Both datasets leverage PerspectiveAPI [99] to evaluate the toxicity of LLM generations. Despite known imperfections and possible demographic biases [198, 192, 62], the automatic evaluations performed by PerspectiveAPI are shown generally align with human judgments [62, 106, 177]. As a result, the evaluation platform HLEM [106] incor- porates these datasets with PerspectiveAPI for toxicity detection. Following the HELM methodology and existing datasets, our work also employs PerspectiveAPI for toxicity evaluation. Furthermore, we propose a novel benchmark with more challenging task prompts specifically designed to further expose model toxicity in different settings. Benchmarks on stereotype bias in LLMs outputs. Several studies have been provided to explore and measure the stereotype
2306.11698#328
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
329
O. M., High methane storage working capacity in metal–organic frameworks with acrylate links. J. Am. Chem. Soc. 2016, 138 (32), 10244-10251. 8. Hayashi, H.; Côté, A. P.; Furukawa, H.; O’Keeffe, M.; Yaghi, O. M., Zeolite A imidazolate frameworks. Nat. Mater. 2007, 6 (7), 501-506. 9. Luo, Y.; Bag, S.; Zaremba, O.; Cierpka, A.; Andreo, J.; Wuttke, S.; Friederich, P.; Tsotsalas, M., MOF synthesis prediction enabled by automatic data mining and machine learning. Angew. Chem. Int. Ed. 2022, 61 (19), e202200242. 10. Nandy, A.; Duan, C.; Kulik, H. J., Using machine learning and data mining to leverage community knowledge for the engineering of stable metal–organic frameworks. J. Am. Chem. Soc. 2021, 143 (42), 17535-17547. 11. Park, H.; Kang, Y.; Choe, W.; Kim, J., Mining Insights on Metal–Organic Framework Synthesis from Scientific Literature
2306.11296#329
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
329
expose model toxicity in different settings. Benchmarks on stereotype bias in LLMs outputs. Several studies have been provided to explore and measure the stereotype bias of the outputs of language models [59, 106, 5, 123, 125]. Brown et al. [28] analyzes bias in GPT-3 by utilizing prompt completion and co-occurrence tests. For instance, the authors perform a study on associations between gender and occupation and find that 83% of the 388 occupations tested are more likely to be followed by a male identifier based on the outputs of GPT-3. The authors also perform co-occurrence tests by generating 800 outputs given a prompt that encoded a specific demographic group and observing the sentiment of the adjectives and adverbs occurring in the completion. This test is performed across three categories - gender (e.g., “{Gender pronoun} would be described as ”), race (e.g., “People would describe the {race} person as ”), and religion (e.g., “{Religion practitioners} are ”). Across all three categories, the authors find that GPT-3 reflects stereotypes present in their training data. Abid et al. [5] demonstrate that
2306.11698#329
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11296
330
11. Park, H.; Kang, Y.; Choe, W.; Kim, J., Mining Insights on Metal–Organic Framework Synthesis from Scientific Literature Texts. J. Chem. Inf. Model. 2022, 62 (5), 1190-1198. 12. Park, S.; Kim, B.; Choi, S.; Boyd, P. G.; Smit, B.; Kim, J., Text mining metal–organic framework papers. J. Chem. Inf. Model. 2018, 58 (2), 244-251. 13. Shannon, R. D., Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides. Acta Crystallogr. A . 1976, 32 (5), 751-767. 14. Haynes, W. M., CRC handbook of chemistry and physics. CRC press: 2016. 15. Pauling, L., The nature of the chemical bond. IV. The energy of single bonds and the relative electronegativity of atoms. J. Am. Chem. Soc. 1932, 54 (9), 3570-3582. 16. Nguyen, K. T.; Blum, L. C.; Van Deursen, R.; Reymond, J. L., Classification of organic molecules by molecular quantum
2306.11296#330
ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 86% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crystallization. We also developed a reliable data-grounded MOF chatbot to answer questions on chemical reactions and synthesis procedures. Given that the process of using ChatGPT reliably mines and tabulates diverse MOF synthesis information in a unified format, while using only narrative language requiring no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be very useful across various other chemistry sub-disciplines.
http://arxiv.org/pdf/2306.11296
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi
cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph
Published on Journal of the American Chemical Society (2023); 102 pages (18-page manuscript, 84 pages of supporting information)
J. Am. Chem. Soc. 2023, 145, 32, 18048-18062
cs.IR
20230620
20230720
[]
2306.11698
330
are ”). Across all three categories, the authors find that GPT-3 reflects stereotypes present in their training data. Abid et al. [5] demonstrate that GPT-3 has a persistent Muslim-violence bias. The authors perform tasks such as prompt completion, analogical reasoning, and story generation to quantify the probability of GPT-3 outputting something violent when the subject is Muslim. The authors find that violent outputs are disproportionately higher for Muslims than other religious groups. Parrish et al. [136] introduces a dataset called the Bias Benchmark for QA (BBQ), which consists of hand-written ambiguous and disambiguated contexts, based on which the language models answer questions such as “Who steals things?”. The dataset consists of 58,492 examples and focuses on nine different categories (e.g., age, disability status, gender identity, etc.) and tests the likelihood of the models relying on stereotypes when answering questions. The authors tested the dataset on the UnifiedQA’s 11B parameter model, RoBERTa, and DeBERTaV3, and find that when the context is disambiguated, the models are fairly successful at giving answers
2306.11698#330
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
331
11B parameter model, RoBERTa, and DeBERTaV3, and find that when the context is disambiguated, the models are fairly successful at giving answers that go against known social biases (indicating better debiasing when the context is disambiguated); however, under ambiguous context, they find that the models rely on social biases to different degrees for prediction (e.g., biases related to physical appearance affected the responses more than biases related to race, sexual orientation, etc.) Liang et al. [106] utilize the BBQ dataset for their bias and stereotype study in which they evaluate 30 models (including GPT-3 and InstructGPT). The authors find that the vast majority of models tested by them show biases that are different from the broader societal marginalization/biases. This might indicate that the efforts paid for debiasing language models are effective to some extent, which is aligned with some of our observations. Our stereotype evaluation complements the above studies by presenting a different perspective for evaluating bias - by directly prompting the GPT models to output their view on stereotype statements. We also utilize system prompts in our benchmark as an effective way of manipulating model responses,
2306.11698#331
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
332
the GPT models to output their view on stereotype statements. We also utilize system prompts in our benchmark as an effective way of manipulating model responses, showcasing their impacts on the model biases. We have incorporated recommendations from [21, 20] by ensuring that our dataset contains stereotypes that are straightforward, avoid stereotype conflation, and have well-documented evidence of their negative impact on the affected demographic groups.
2306.11698#332
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
334
Benchmarks on the robustness of LLMs against adversarial texts. The robustness of large language models (LLMs) has been a great concern in practice. As one of the early works trying to gauge the robustness of LLMs, Wang et al. [176] introduces AdvGLUE [176], a multi-task benchmark designed to evaluate the vulnerabilities of LLMs under various types of adversarial attacks. The study systematically applies 14 textual adversarial attack methods to GLUE tasks to construct AdvGLUE, which is then validated by humans for reliable annotations. Furthermore, under the context of GPT models, Wang et al.[181] utilizes the dev set of AdvGLUE [176] and ANLI [126] to evaluate the adversarial robustness of GPT-3.5. The results indicate that GPT-3.5 shows consistent advantages in classification and translation tasks. However, the absolute performance is not perfect, suggesting that adversarial robustness still remains a significant challenge for GPT models. In addition, as prompt engineering unlocks the immense capabilities of GPT models, their vulnerabilities to adversarial prompts has attracted the attention of research community. To measure the resilience of LLMs to
2306.11698#334
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
335
capabilities of GPT models, their vulnerabilities to adversarial prompts has attracted the attention of research community. To measure the resilience of LLMs to adversarial prompts, Wang et al. [181] designs PromptBench [181] using a wide range of textual adversarial attacks at various levels (character, word, sentence, and semantic) and applies them to different tasks. Their results show that current LLMs are vulnerable to adversarial prompts. The study also provides a detailed analysis of prompt robustness and its transferability, as well as practical recommendations for prompt composition, which would be helpful for different communities. In our work, we evaluate the robustness of GPT-4 and GPT-3.5 on AdvGLUE, and further generate adversarial texts against several existing autoregressive models to test the robustness of advanced GPT models. We show that although GPT models are more robust on the existing benchmarks, they are still vulnerable to advanced attacks and different adversarial prompts. Benchmarks on the robustness of LLMs against out-of-distribution texts. In addition to adversar- ial robustness, the robustness to out-of-distribution
2306.11698#335
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
336
robustness of LLMs against out-of-distribution texts. In addition to adversar- ial robustness, the robustness to out-of-distribution (OOD) inputs is another critical topic for LLMs [131, 147, 92, 118, 13]. In the context of pre-trained language models, several benchmarks have been proposed in the past to evaluate their OOD robustness given in-distribution training datasets and their corresponding OOD testing datasets [199, 56, 204, 72]. However, such direct evaluation of OOD robustness in a zero-shot context using these benchmarks presents challenges for LLMs [106], particularly for GPT models, due to the inaccessibility of web-scale pre-training and instruction tuning data. To circumvent this issue, one approach is to leverage synthesized data as the OOD test data, which includes various text transformations (e.g., misspellings, synonym substitutions, etc.) [106, 64, 68]. This approach provides an assessment of model robustness by testing the model performance given a wide range of textual transformations that are considered rare in the training and instruction tuning distributions. In addition to the synthesized dataset, Wang et al. [181]
2306.11698#336
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
337
that are considered rare in the training and instruction tuning distributions. In addition to the synthesized dataset, Wang et al. [181] proposes to leverage datasets that are obtained after the data collection date of GPT models for testing, thereby introducing a temporal distribution shift [7]. Furthermore, to evaluate the OOD robustness in the context of in-context learning, recent studies [204, 155, 119] have undertaken assessments using test inputs from standard benchmarks, with demonstrations sourced from varying distributions. This allows for a more detailed analysis of the model’s capability to generalize from the demonstration distribution to the test distribution. In this work, we provide a comprehensive OOD robustness evaluation and construct OOD data by leveraging diverse text transformations, OOD knowledge, and OOD domains in both zero-shot and in-context learning settings. Benchmarks on the robustness of LLMs against adversarial demonstrations via in-context learning. In-context learning aims to adapt LLMs to downstream tasks by using several demonstration examples as the model input [28]. Since it does not require further finetuning or parameter updates, the performance of in-context learning represents the intrinsic
2306.11698#337
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
338
examples as the model input [28]. Since it does not require further finetuning or parameter updates, the performance of in-context learning represents the intrinsic capabilities of LLMs. Going beyond evaluating in-context learning on traditional benchmarks [28, 107, 211], researchers have proposed more challenging benchmarks [160, 121, 185, 150] for in-context learning to explore the potential of LLMs. Another line of research is to evaluate the robustness of in-context learning and understand the role of demonstrations. Lu et al. [111] evaluates the order sensitivity of the demonstration examples. Min et al. [119] and Kim et al. [90] study the role of the ground-truth labels of the demonstration examples. Wei et al. [191] studies how semantic priors of the label space would affect in-context learning. Wang et al. [182] studies if constructing adversarial demonstrations without changing the test input would affect model predictions. Complementary to this work [182], our evaluation on robustness of LLMs against adversarial demonstrations further categorizes the demonstrations into counterfactual examples, examples with spurious correlations, and backdoored examples, and explores the
2306.11698#338
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
339
demonstrations further categorizes the demonstrations into counterfactual examples, examples with spurious correlations, and backdoored examples, and explores the relationships between the test inputs and the demonstrations. Benchmarks on the privacy of LLMs. To pretrain LLMs, a significant amount of web-scraped data is often utilized as training data. However, such data often contain privacy-sensitive information, e.g.,
2306.11698#339
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
341
personally identifiable information (PII), which raises great concerns regarding the possible leakage of private data from LLMs. Prior works have shown that the training data can be extracted from pretrained language models base on prediction likelihood [30, 120] or only API access [31, 78, 33, 208, 112, 100, 149]. For instance, Carlini et al. [31] scrape data from the Internet and find that, when conditioned on the prefixes, GPT-2 could generate verbatim text sequences as found in the scraped data. Moreover, Carlini et al. [33] leverage the pretrained dataset of GPT-Neo to construct the prefixes (i.e., context) as the prompt for GPT-Neo models, and demonstrate that the model’s memorization of training data scales with the model scale, data repetition, and the context length. Similarly, it has been observed that GPT-Neo models can memorize sensitive information such as email addresses or phone numbers from the Enron Email dataset [78, 149]. Lukas et al. [112] comprehensively evaluate the PII leakage via black-box extraction, inference, and reconstruction attacks against GPT-2 models fine-tuned with and
2306.11698#341
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
342
comprehensively evaluate the PII leakage via black-box extraction, inference, and reconstruction attacks against GPT-2 models fine-tuned with and without defense methods (e.g., differential privacy). To exact PII from the recent ChatGPT model, Li et al. [100] propose multi-step jailbreaking prompts as stronger privacy threats. To mitigate the privacy leakage risks of LLMs, researchers employ techniques such as de-duplication of training data to reduce the probability of LLMs memorizing training data, thereby enhancing their security against privacy attacks [98, 83]. To provide formal privacy guarantees, Differential Privacy (DP) [53] has been widely adopted. One common approach to achieve DP is applying DP-SGD [3] during LLM training, which involves clipping the per-sample gradient and adding noise. Yu et al. [203] investigate different parameter-efficient fine-tuning methods using DP-SGD for LLMs, achieving a promising balance between privacy and utility. Li et al. [104] introduce a novel memory-saving clipping technique, which enhances the efficiency of fine-tuning Transformers under DP-SGD.
2306.11698#342
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
343
Li et al. [104] introduce a novel memory-saving clipping technique, which enhances the efficiency of fine-tuning Transformers under DP-SGD. Another line of work focuses on fine-tuning LLMs like GPT-2 under DP-SGD and generating synthetic text datasets for sharing [113, 205]. Such synthetic text data can be used to train NLP models on downstream tasks non-privately (i.e., without DP-SGD), which would lead to higher utility. Instead of protecting the privacy of each individual training sample as required by DP, several works explore the notion of selective-DP [210, 151], where only the chosen sensitive information (e.g., PII) within each training sample needs to be protected. In addition to protecting the privacy of training data, recent studies propose DP in-context learning methods for LLMs to protect the privacy of the prompt information during inference [134, 51]. Our work takes the initial step to study the privacy risks associated with the recent GPT-3.5 and GPT-4 models, not only from the perspectives of private training data but also the private information injected during inference. Benchmarks on machine ethics of LLMs. Ethics are principles and standards of
2306.11698#343
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
344
from the perspectives of private training data but also the private information injected during inference. Benchmarks on machine ethics of LLMs. Ethics are principles and standards of behavior that guide people in making decisions, which are helpful in promoting good values such as respect and goodwill and preventing harm to individuals and the environment. Hence, ethics play a significant role in shaping the way we live, work, and interact with one another. As artificial intelligence and other advanced technologies continue to develop and integrate into various aspects of our lives, machine ethics, i.e., the implementation of ethical principles and guidelines for AI systems, is becoming increasingly important. Recently, language models have experienced a surge in popularity due to their ability to interact with humans in a conversational manner and generate human-like text. A language model without machine ethics may generate responses that are detrimental to human values and social norms. Therefore, benchmarks on the machine ethics of language models are in great demand. ETHICS [73] proposes diverse contextualized natural language scenarios to assess a language model’s basic knowledge of different ethical concepts that convey justice, deontology, virtue ethics, utilitarianism, and commonsense moral judgments. To enable a
2306.11698#344
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
345
language model’s basic knowledge of different ethical concepts that convey justice, deontology, virtue ethics, utilitarianism, and commonsense moral judgments. To enable a rich variety of reasoning about legality, cultural pressure, and the morality of each real-life scenario, SOCIAL- CHEM-101 [58] provides a large-scale corpus containing 292k rules-of-thumb, i.e., a descriptive cultural norm structured as the judgment of an action, which are mapped to 12 dimensions spanning social judgments of good and bad, theoretical categories of moral foundations, expected cultural pressure, and assumed legality. Similarly, in order to perform goal-oriented social reasoning, Moral Stories [54] provides a crowd-sourced dataset of structured narratives consisting of the goal, the normative and norm-divergent actions to accomplish the goal, and their respective consequences. In addition to assessing the ethical background knowledge of language models, various types of benchmarks are provided to explore different aspects of machine ethics. Jin et al. [82] proposes the moral exception question answering (MoralExceptQA) set consisting of cases that involve potentially permissible moral exceptions. Acharya et
2306.11698#345
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
346
[82] proposes the moral exception question answering (MoralExceptQA) set consisting of cases that involve potentially permissible moral exceptions. Acharya et al. [6] investigates ritual understanding across cultures. Besides, as a representative AI system to interact with humans, the artificial agents (including language-model agents and reinforcement-learning agents) in text-based interactions such as ad59
2306.11698#346
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
347
venture games should also be endowed with correct knowledge of machine ethics. Côté et al. [42], Shridhar et al. [154] and Hausknecht et al. [71] provide several procedurally generated text-based worlds as benchmarks, while lacking complex social interactions, which are crucial in studying agent behaviors in the real world. Jiminy Cricket [75] integrates 25 text-based adventure games with thousands of diverse scenarios and annotates every possible game state, thus providing abundant moral knowledge of an agent’s behavior. Similarly, MACHIAVELLI [133] introduces a benchmark consisting of 134 Choose-Your-Own-Adventure games, including over half a million diverse scenarios which focus on rich social concepts that are not limited to commonsense morality. Our work provides machine ethics evaluations for GPT-4 and GPT-3.5 on existing benchmarks, our designed adversarial prompts and evasive sentences, and different conditioned behaviors with specific properties. Benchmarks on the fairness of LLMs. Fairness of machine learning models is an active research area to ensure that the models are reliable and free from bias [52, 117, 35, 89, 16, 4, 143]. Although LLMs have
2306.11698#347
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
348
learning models is an active research area to ensure that the models are reliable and free from bias [52, 117, 35, 89, 16, 4, 143]. Although LLMs have demonstrated tremendous capabilities across variant tasks, the fairness of predictions is still a critical problem [212, 215, 127, 70, 108]. Therefore, a series of studies on the evaluations of LLM fairness have been conducted [156, 106, 105]. Socher et al. [156] examines whether GPT-3 produces unfair predictions in two downstream tasks, coreference resolution, and question answering. Liang et al. [106] evaluates the counterfactual fairness [95] by measuring the prediction invariance under perturbations on the speaker or the subject and the performance disparity by reporting model accuracy across different groups. However, the influence of unfair/fair few-shot examples and the bias of test distribution on the fairness of model predictions are not well studied. Li and Zhang [105] evaluates the fairness of ChatGPT given different in-context examples, which aligns with our observation in evaluations with unfair contexts but lacks formal characterization of the unfairness for the in-context examples. In this work,
2306.11698#348
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
349
which aligns with our observation in evaluations with unfair contexts but lacks formal characterization of the unfairness for the in-context examples. In this work, we conduct a comprehensive fairness evaluation for GPT-3.5 and GPT-4 by studying the fairness of model predictions in both zero-shot and few-shot settings. We also evaluate the impact of demographically imbalanced (unfair) demonstrations and the number of balanced (fair) demonstrations on the fairness of GPT models. Related work on prompt hacking. Thanks to the improved capabilities of LLMs to follow in- structions after instruction tuning [189, 38] and Reinforcement Learning with Human Feedback (RLHF) [132], users can configure the tone and role of LLMs via system prompts, and configure the task description and task prompts via user prompts. However, these new capabilities also raise new trustworthiness concerns and introduce a new type of attack named Prompt Hacking [97]. Recent research mainly covers three main types of prompt hacking, including prompt injection, prompt leaking, and jailbreaking prompts. Prompt injection involves adding malicious or unintended content to a prompt to hijack the language model’s output and
2306.11698#349
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
350
and jailbreaking prompts. Prompt injection involves adding malicious or unintended content to a prompt to hijack the language model’s output and mislead the model to output a specific string. For example, PromptInject [137] inserts potentially harmful content into the prompt to mislead LLMs to deviate from the task outlined in the original prompt. In addition, PromptInject also explores prompt leaking, which attempts to print out and leak the original prompt. However, PromptInject only studies GPT-3, and the provided handcrafted prompts can only serve as a simple trial to reveal the vulnerability of GPT-3. There are also other works [66, 196, 197, 67] exploring the possibility of misleading GPT-based applications. Jailbreaking prompts intend to bypass the safety and moral values in LLMs and induce models to generate harmful content for users. For example, inspired by traditional computer security, [84] treats GPT models (ChatGPT, GPT-3, and InstructGPT model series) as computer programs and proposes code injection prompts to bypass OpenAI’s policies and results in toxic generations. [46] crafts jailbreaking
2306.11698#350
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
351
as computer programs and proposes code injection prompts to bypass OpenAI’s policies and results in toxic generations. [46] crafts jailbreaking prompts called DAN (Do Anything Now) which remove OpenAI’s restrictions on content generation and let GPT-4 role-play a new language model that can do anything now and is likely to obey all task descriptions regardless of any policy-related concern. A token system is additionally proposed to penalize GPT-4 if it rejects to answer. In contrast, our designed jailbreaking prompts not only successfully elicit toxicity in LLM generations but also manage to mislead GPT models from various perspectives, such as making GPT models fail to recognize commonsense immoral behaviors. In terms of eliciting toxicity, we also consider different eliciting types apart from role-playing, such as saying the opposite and replacing word meaning. Hence, we introduce a wider range of jailbreaking prompts, fostering a multifaceted exploration of adversarial/misleading prompts posed to language models. Regulations related to the trustworthiness of LLMs. The trustworthiness of LLMs and other AI systems has also been a key focus of
2306.11698#351
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
352
prompts posed to language models. Regulations related to the trustworthiness of LLMs. The trustworthiness of LLMs and other AI systems has also been a key focus of policymakers. As the first work of comprehensive legislation proposed by a major regulator, the European Union’s draft Artificial Intelligence Act (AIA) provides a risk-based regulatory framework that prescribes regulatory requirements [40] for AI systems based on their risk levels, including different trustworthiness perspectives discussed in this work. This
2306.11698#352
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
354
legislation requires high-risk AI systems – AI systems deployed in critical applications specified by the AIA (AIA ANNEX III of [40]), such as law enforcement – to undergo a rigorous compliance assessment before public deployment. Due to the constantly evolving nature of most AI systems, a continuous post-market monitoring system is also mandated for such systems, ensuring that any significant changes or issues are promptly detected and addressed. Of notable importance to this work, AIA requires high-risk AI systems that undergo constant updates to ensure that potentially biased outputs due to feedback loops are addressed with appropriate mitigation measures (Article 15-3 of [40]). In addition, AIA identifies “technical robustness” as a key requirement for high-risk AI systems. It stipulates that high-risk AI systems should be resilient against risks arising from model limitations, such as “unexpected situations” and malicious actions (Article 15-3 and 15-4 of [40]). More importantly, at the time of writing, the newly adopted draft legislation by the European Parliament requires technical solutions that address AI-specific vulnerabilities to conform with AIA to mitigate data poisoning, model
2306.11698#354
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
355
newly adopted draft legislation by the European Parliament requires technical solutions that address AI-specific vulnerabilities to conform with AIA to mitigate data poisoning, model poisoning (backdoor), adversarial examples, and “confidentiality attacks” (Amendment 329 of [135]). These specifications are highly relevant to our discussions about adversarial robustness, out-of-distribution robustness, and privacy. In light of the recent developments of (generative) machine learning models, the European Parliament also includes additional provisions in the draft legislation to extend the proposed regulations into scenarios in which foundation models are provided as a service through API access and require proper disclosure of AI-generated content. It also recognizes the need to develop techniques for the conformity assessment of foundation models through “model evaluation, red-teaming or machine learning verification and validation techniques” (Amendment 102 of [135]). In addition to the European Union, the United States has also proposed several policy initiatives regulating AI systems at the federal level. Most notably, the White House Office of Science and Technology Policy (OSTP) has proposed the AI Bill of Rights [195], which outlines five principles, including safety, fairness, privacy,
2306.11698#355
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
356
House Office of Science and Technology Policy (OSTP) has proposed the AI Bill of Rights [195], which outlines five principles, including safety, fairness, privacy, interpretability, and human-in-the-loop interventions. In response to the changing regulatory landscape, the research community has also proposed proce- dures to assess the compliance of existing AI systems to the proposed regulations. For example, [23] evaluates the major foundation model providers following the requirements of the AIA at different stages of the life cycle for a foundation model. [57] proposes a technical evaluation procedure for conducting compliance assessments of AI systems in the context of AIA.
2306.11698#356
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
358
In this work, we provide comprehensive evaluations of the trustworthiness of GPT-4 and GPT-3.5 from different perspectives, including toxicity, bias on stereotypes, robustness on adversarial attacks, robustness on OOD examples, robustness against adversarial demonstrations, privacy, ethics, and fairness. We find that, in general, GPT-4 performs better than GPT-3.5 under different metrics; however, when there are jailbreaking or misleading (adversarial) system prompts or demonstrations via in-context learning, GPT-4 is much easier to manipulate since it follows the instructions more precisely, which raises additional concerns. In addition, based on our demonstrations, there are many factors and properties of the inputs that would affect the model’s trustworthiness – which is worth further exploration. We also extend our evaluation beyond GPT-3.5 and GPT-4, supporting more open LLMs to help model practitioners assess the risks of different models with DecodingTrust in App. I. Given our evaluations and the vulnerabilities of GPT models, we provide the following potential future directions to further explore other
2306.11698#358
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
359
with DecodingTrust in App. I. Given our evaluations and the vulnerabilities of GPT models, we provide the following potential future directions to further explore other vulnerabilities, as well as safeguard LLMs against these vulnerabilities. • Evaluations with more interactions. In this work, we mainly evaluate different perspectives of trustworthiness for GPT models on static datasets, such as 1-2 rounds of conversations. Given the dynamic nature of large language models, it would be important to evaluate the LLMs with interactive conversations and assess whether these vulnerabilities of the models would become more severe or not. • Misleading context beyond jailbreaking system prompts and demonstrations in in-context learning. In order to evaluate potentially the worst-case performance of GPT models, we design different jailbreaking system prompts and diverse misleading (adversarial) demonstrations to evaluate the model vulnerabilities. In addition to such misleading prompts, one can also inject misleading information during the conversation (e.g., “honeypot conversation") to mislead the model performance. It would be interesting to see how vulnerable the model is under different types of misleading contexts.
2306.11698#359
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
361
Evaluation considering coordinated adversaries. In this work, we mainly consider one type of misleading or adversarial cases for each test scenario. However, in practice, it is possible that different adversaries would coordinate to fool the model given, say, strong economic incentives. Thus, it is important to explore how vulnerable the model could be under coordinated and stealthy adversarial behaviors. • Domain-specific trustworthiness evaluations. Our evaluations in this work focus on the general vulnerabilities of GPT models, and we use standard tasks such as sentiment classification and NLI tasks as illustrations. In practice, GPT models have already been widely adopted in different domains, such as laws and education, so it is important to evaluate the model vulnerabilities based on their specific usage in different domains. • Verification for the trustworthiness of GPT models. Empirical evaluations of LLMs are important but lack of guarantees, especially in safety-critical domains such rigorous guarantees would be critical. In addition, the discrete nature of GPT models makes it challenging to provide rigorous verification for such models. It would be important to divide the challenging problem into solvable sub-problems, such as providing guarantees and verification for the performance
2306.11698#361
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
362
verification for such models. It would be important to divide the challenging problem into solvable sub-problems, such as providing guarantees and verification for the performance of GPT models potentially based on their concrete functionalities [200, 188], providing verification based on the model abstractions, or mapping the discrete space to their corresponding continuous space such as the embedding space with semantic preservation to perform verification. • Safeguarding GPT models with additional knowledge and reasoning analysis. As purely data- driven models, GPT models would suffer from the imperfection of the training data and lack of reasoning capabilities in various tasks. Thus, it would be important to equip domain knowledge and logical reasoning capabilities for language models and safeguard their outputs to make sure they satisfy basic domain knowledge or logic to ensure the trustworthiness of the model outputs, such as retrieval-augmented pretraining [180, 179]. • Safeguarding GPT models based on game-theoretic analysis. Our designed system prompts based on “role-playing" shows that models can be easily fooled based on role-changing and manipulation. This indicates that during the conversation of GPT models, it is possible to design diverse roles
2306.11698#362
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
363
be easily fooled based on role-changing and manipulation. This indicates that during the conversation of GPT models, it is possible to design diverse roles to ensure the consistency of the model’s answers, and therefore at least avoid the models being self-conflict. It is also possible to design different roles for the models to make sure it understands the context better to provide more informative and trustworthy answers. • Auditing GPT models based on given instructions and contexts. Our evaluations here are based on general purpose, and sometimes users would have specific safety or trustworthiness requirements which are important to enforce the models to follow. Thus, it is important to map the user requirements and instructions to certain logical spaces or design specific contexts and verify whether the models’ outputs satisfy these requirements in order to audit the model more efficiently and effectively. • Auditing GPT models based on given instructions and contexts. Our evaluations are based on general-purpose uses, and sometimes users may have specific safety or trustworthiness requirements that are important to enforce the models to follow. Thus, it is important to map the user requirements and instructions to certain logical spaces or design specific contexts and verify whether the models’
2306.11698#363
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
365
Acknowledgements We sincerely thank Percy Liang, Tatsunori Hashimoto, and Chris Re for their valuable discussion and feedback on the manuscript. This work is partially supported by the National Science Foundation under grant No. 1910100, No. 2046726, No. 2229876, DARPA GARD, the National Aeronautics and Space Administration (NASA) under grant no. 80NSSC20M0229, Alfred P. Sloan Fellowship, the Amazon research award, and the eBay research grant. SK acknowledges support from the National Science Foundation under grants No. 2046795, 1934986, 2205329, and NIH 1R01MH116226-01A, NIFA award 2020-67021-32799, the Alfred P. Sloan Foundation, and Google Inc. # References [1] Jailbreak chat. https://www.jailbreakchat.com/. # [2] Shakespearean. https://lingojam.com/shakespearean. [3] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pages 308–318, 2016. 62
2306.11698#365
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
366
62 [4] R. Abebe, S. Barocas, J. Kleinberg, K. Levy, M. Raghavan, and D. G. Robinson. Roles for computing in social change. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2019. doi: 10.1145/3351095.3372871. [5] A. Abid, M. Farooqi, and J. Zou. Persistent anti-muslim bias in large language models, 2021. [6] A. Acharya, K. Talamadupula, and M. A. Finlayson. An atlas of cultural commonsense for machine reasoning. CoRR, abs/2009.05664, 2020.
2306.11698#366
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
367
machine reasoning. CoRR, abs/2009.05664, 2020. [7] O. Agarwal and A. Nenkova. Temporal effects on pre-trained models for language processing tasks. Transactions of the Association for Computational Linguistics, 10:904–921, 2022. [8] A. F. Akyürek, S. Paik, M. Kocyigit, S. Akbiyik, S. L. Runyun, and D. Wijaya. On mea- suring social biases in prompt-based multi-task learning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 551–564, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-naacl.42. URL https://aclanthology.org/2022.findings-naacl.42.
2306.11698#367
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
368
[9] E. Almazrouei, H. Alobeidli, A. Alshamsi, A. Cappelli, R. Cojocaru, M. Debbah, E. Goffinet, D. Heslow, J. Launay, Q. Malartic, B. Noune, B. Pannier, and G. Penedo. Falcon-40B: an open large language model with state-of-the-art performance. 2023. [10] American Association of University Women. Barriers & bias: The status of women in leadership. https://www.aauw.org/resources/research/barrier-bias/. [11] Anti-Defamation League. Myth: Jews are greedy. https://antisemitism.adl.org/ greed/. [12] Anti-Defamation League. and https://www.adl.org/resources/tools-and-strategies/ Myths and facts about muslim people islam. myths-and-facts-about-muslim-people-and-islam, 2022.
2306.11698#368
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
369
[13] U. Arora, W. Huang, and H. He. Types of out-of-distribution texts and how to detect them. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10687–10701, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.835. URL https:// aclanthology.org/2021.emnlp-main.835. [14] Association stereotypes. bad-drivers-no-just-bad-stereotypes.html, 2014. for Psychological Science. bad https://www.psychologicalscience.org/news/motr/ Bad drivers? no, just [15] A. Asuncion and D. Newman. Uci machine learning repository, 2007. [16] S. Barocas and A. D. Selbst. Big data’s disparate impact. California Law Review, 104:671, 2016. [17] S. W. Bender. Sight, sound, and stereotype: The war on terrorism and its consequences for lati- nas/os. Oregon Law Review, 81, 2002. URL https://digitalcommons.law.seattleu. edu/faculty/296.
2306.11698#369
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
370
[18] J. A. Berg. Opposition to pro-immigrant public policy: Symbolic racism and group threat. Sociological Inquiry, 83(1):1–31, 2013. doi: https://doi.org/10.1111/j.1475-682x.2012.00437. x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1475-682x.2012. 00437.x. [19] S. Bird, E. Klein, and E. Loper. Natural language processing with Python: analyzing text with the natural language toolkit. " O’Reilly Media, Inc.", 2009. [20] S. L. Blodgett, S. Barocas, H. Daumé III, and H. Wallach. Language (technology) is power: A critical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454–5476, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.485. URL https://aclanthology.org/2020.acl-main.485.
2306.11698#370
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
371
[21] S. L. Blodgett, G. Lopez, A. Olteanu, R. Sim, and H. Wallach. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004– 1015, Online, Aug. 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021. acl-long.81. URL https://aclanthology.org/2021.acl-long.81. 63 [22] T. Bolukbasi, K.-W. Chang, J. Zou, V. Saligrama, and A. Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings, 2016. [23] R. Bommasani, K. Klyman, D. Zhang, and P. Liang. Do foundation model providers comply with the eu ai act?, 2023. URL https://crfm.stanford.edu/2023/06/15/eu-ai-act. html.
2306.11698#371
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
372
[24] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal, Sept. 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1075. URL https: //aclanthology.org/D15-1075. [25] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. A large annotated corpus for learning natural language inference. In L. Màrquez, C. Callison-Burch, J. Su, D. Pighin, and Y. Marton, editors, EMNLP, 2015. [26] Brookings ers? do-immigrants-steal-jobs-from-american-workers/, 2017. Institution. from american work- https://www.brookings.edu/blog/brookings-now/2017/08/24/ Do immigrants “steal” jobs
2306.11698#372
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
373
[27] H. Brown, K. Lee, F. Mireshghallah, R. Shokri, and F. Tramèr. What does it mean for a language model to preserve privacy? In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 2280–2292, 2022. [28] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. 2020.
2306.11698#373
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
374
[29] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. [30] N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, and D. Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In 28th USENIX Security Symposium, USENIX Security 2019, 2019. [31] N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. B. Brown, D. Song, U. Erlingsson, et al. Extracting training data from large language models. In USENIX Security Symposium, volume 6, 2021.
2306.11698#374
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
375
[32] N. Carlini, J. Hayes, M. Nasr, M. Jagielski, V. Sehwag, F. Tramer, B. Balle, D. Ippolito, and E. Wallace. Extracting training data from diffusion models. In arXiv:2301.13188v1, 2023. [33] N. Carlini, D. Ippolito, M. Jagielski, K. Lee, F. Tramer, and C. Zhang. Quantifying memoriza- tion across neural language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=TatRHT_1cK. [34] B. J. Casad, P. Hale, and F. L. Wachs. Stereotype threat among girls: Differences by gender identity and math education context. Psychology of Women Quarterly, 41(4):513–529, 2017. doi: 10.1177/0361684317711412. URL https://doi.org/10.1177/0361684317711412. [35] S. Caton and C. Haas. arXiv:2010.04053, 2020. Fairness in machine learning: A survey. arXiv preprint
2306.11698#375
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
376
[35] S. Caton and C. Haas. arXiv:2010.04053, 2020. Fairness in machine learning: A survey. arXiv preprint [36] X. Chen, A. Salem, D. Chen, M. Backes, S. Ma, Q. Shen, Z. Wu, and Y. Zhang. Badnl: Backdoor attacks against nlp models with semantic-preserving improvements. In ACSAC, 2021. [37] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing. Vicuna: An open-source chatbot impress- ing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/ 2023-03-30-vicuna/.
2306.11698#376
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
377
[38] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, D. Valter, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. M. Dai, H. Yu, S. Petrov, E. Chi, J. Dean, 64 J. Devlin, A. Roberts, D. Zhou, Q. V. Le, and J. Wei. Scaling instruction-finetuned language models. ARXIV.ORG, 2022. doi: 10.48550/arXiv.2210.11416.
2306.11698#377
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
378
[39] CNN. Microsoft is bringing chatgpt technology to word, excel and outlook, 2023. URL https: //www.cnn.com/2023/03/16/tech/openai-gpt-microsoft-365/index.html. [40] E. Commission. Laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. https://eur-lex.europa.eu/resource. html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1& format=PDF, 2021. [41] T. Computer. Redpajama: An open source recipe to reproduce llama training dataset, 2023. URL https://github.com/togethercomputer/RedPajama-Data.
2306.11698#378
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
379
[42] M. Côté, Á. Kádár, X. Yuan, B. Kybartas, T. Barnes, E. Fine, J. Moore, M. J. Hausknecht, L. E. Asri, M. Adada, W. Tay, and A. Trischler. Textworld: A learning environment for text- based games. In Computer Games - 7th Workshop, CGW, Held in Conjunction with the 27th International Conference on Artificial Intelligence, IJCAI, volume 1017 of Communications in Computer and Information Science, pages 41–75. Springer, 2018. [43] G. Cui, L. Yuan, B. He, Y. Chen, Z. Liu, and M. Sun. A unified evaluation of textual backdoor learning: Frameworks and benchmarks. arXiv preprint arXiv:2206.08514, 2022. [44] Cybernews. Lessons learned from chatgpt’s samsung leak, 2023. URL https://cybernews. com/security/chatgpt-samsung-leak-explained-lessons/. [45] J. Dai, C. Chen, and Y. Li. A backdoor attack against lstm-based text classification systems. IEEE Access, 7:138872–138878, 2019.
2306.11698#379
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
380
[46] L. Daryanani. How to jailbreak chatgpt. https://watcher.guru/news/ how-to-jailbreak-chatgpt. [47] J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, and T. Solorio, editors, NAACL-HLT, 2019. [48] J. Dhamala, T. Sun, V. Kumar, S. Krishna, Y. Pruksachatkun, K.-W. Chang, and R. Gupta. Bold: Dataset and metrics for measuring biases in open-ended language generation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 862–872, 2021.
2306.11698#380
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
381
[49] K. D. Dhole, V. Gangal, S. Gehrmann, A. Gupta, Z. Li, S. Mahamood, A. Mahendiran, S. Mille, A. Srivastava, S. Tan, T. Wu, J. Sohl-Dickstein, J. D. Choi, E. Hovy, O. Dusek, S. Ruder, S. Anand, N. Aneja, R. Banjade, L. Barthe, H. Behnke, I. Berlot-Attwell, C. Boyle, C. Brun, M. A. S. Cabezudo, S. Cahyawijaya, E. Chapuis, W. Che, M. Choudhary, C. Clauss, P. Colombo, F. Cornell, G. Dagan, M. Das, T. Dixit, T. Dopierre, P.-A. Dray, S. Dubey, T. Ekeinhor, M. D. Giovanni, R. Gupta, R. Gupta, L. Hamla, S. Han, F. Harel-Canada, A. Honore, I. Jindal, P. K. Joniak, D. Kleyko, V.
2306.11698#381
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
382
Hamla, S. Han, F. Harel-Canada, A. Honore, I. Jindal, P. K. Joniak, D. Kleyko, V. Kovatchev, K. Krishna, A. Kumar, S. Langer, S. R. Lee, C. J. Levinson, H. Liang, K. Liang, Z. Liu, A. Lukyanenko, V. Marivate, G. de Melo, S. Meoni, M. Meyer, A. Mir, N. S. Moosavi, N. Muennighoff, T. S. H. Mun, K. Murray, M. Namysl, M. Obedkova, P. Oli, N. Pasricha, J. Pfister, R. Plant, V. Prabhu, V. Pais, L. Qin, S. Raji, P. K. Rajpoot, V. Raunak, R. Rinberg, N. Roberts, J. D. Rodriguez, C. Roux, V. P. H. S., A. B. Sai, R. M. Schmidt, T. Scialom, T. Sefara, S. N. Shamsi, X. Shen, H. Shi,
2306.11698#382
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
383
Sai, R. M. Schmidt, T. Scialom, T. Sefara, S. N. Shamsi, X. Shen, H. Shi, Y. Shi, A. Shvets, N. Siegel, D. Sileo, J. Simon, C. Singh, R. Sitelew, P. Soni, T. Sorensen, W. Soto, A. Srivastava, K. A. Srivatsa, T. Sun, M. V. T, A. Tabassum, F. A. Tan, R. Teehan, M. Tiwari, M. Tolkiehn, A. Wang, Z. Wang, G. Wang, Z. J. Wang, F. Wei, B. Wilie, G. I. Winata, X. Wu, W. Wydma´nski, T. Xie, U. Yaseen, M. Yee, J. Zhang, and Y. Zhang. Nl-augmenter: A framework for task-sensitive natural language augmentation, 2021.
2306.11698#383
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
384
[50] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. [51] H. Duan, A. Dziedzic, N. Papernot, and F. Boenisch. Flocks of stochastic parrots: Differentially private prompt learning for large language models. arXiv preprint arXiv:2305.15594, 2023. [52] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pages 214– 226, 2012. 65 [53] C. Dwork, A. Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014.
2306.11698#384
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
385
[54] D. Emelin, R. L. Bras, J. D. Hwang, M. Forbes, and Y. Choi. Moral stories: Situated reasoning about norms, intents, actions, and their consequences. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 698–718. Association for Computational Linguistics, 2021. [55] A. Fan, M. Lewis, and Y. Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1082. URL https://aclanthology.org/P18-1082.
2306.11698#385
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
386
[56] A. Fisch, A. Talmor, R. Jia, M. Seo, E. Choi, and D. Chen. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 1–13, Hong Kong, China, Nov. 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5801. URL https: //aclanthology.org/D19-5801. [57] L. Floridi, M. Holweg, M. Taddeo, J. Amaya Silva, J. Mökander, and Y. Wen. Capai-a procedure for conducting conformity assessment of ai systems in line with the eu artificial intelligence act. Available at SSRN 4064091, 2022. [58] M. Forbes, J. D. Hwang, V. Shwartz, M. Sap, and Y. Choi. Social chemistry 101: Learn- ing to reason about social and moral norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 653–670. Association for Computational Linguistics, 2020.
2306.11698#386
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
387
[59] D. Ganguli, A. Askell, N. Schiefer, T. I. Liao, K. Lukoši¯ut˙e, A. Chen, A. Goldie, A. Mirhoseini, C. Olsson, D. Hernandez, D. Drain, D. Li, E. Tran-Johnson, E. Perez, J. Kernion, J. Kerr, J. Mueller, J. Landau, K. Ndousse, K. Nguyen, L. Lovitt, M. Sellitto, N. Elhage, N. Mercado, N. DasSarma, O. Rausch, R. Lasenby, R. Larson, S. Ringer, S. Kundu, S. Kadavath, S. Johnston, S. Kravec, S. E. Showk, T. Lanham, T. Telleen-Lawton, T. Henighan, T. Hume, Y. Bai, Z. Hatfield-Dodds, B. Mann, D. Amodei, N. Joseph, S. McCandlish, T. Brown, C. Olah, J. Clark, S. R. Bowman, and J. Kaplan. The capacity for moral self-correction in large language models, 2023.
2306.11698#387
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
388
[60] L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. [61] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018. [62] S. Gehman, S. Gururangan, M. Sap, Y. Choi, and N. A. Smith. RealToxicityPrompts: Evaluat- ing neural toxic degeneration in language models. In Findings in EMNLP, 2020.
2306.11698#388
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
389
[63] A. Gentile, S. Boca, and I. Giammusso. ‘you play like a woman!’ effects of gender stereotype threat on women’s performance in physical and sport activities: A meta-analysis. Psychology of Sport and Exercise, 39:95–103, 2018. ISSN 1469-0292. doi: https://doi.org/10.1016/j. psychsport.2018.07.013. URL https://www.sciencedirect.com/science/article/ pii/S1469029217305083. [64] K. Goel, N. Rajani, J. Vig, S. Tan, J. Wu, S. Zheng, C. Xiong, M. Bansal, and C. Ré. Robustness gym: Unifying the nlp evaluation landscape. arXiv preprint arXiv:2101.04840, 2021. [65] A. Gokaslan and V. Cohen. Openwebtext corpus. http://Skylion007.github.io/ OpenWebTextCorpus, 2019.
2306.11698#389
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
390
OpenWebTextCorpus, 2019. [66] R. Goodside. Exploiting gpt-3 prompts with malicious inputs that order the model to ig- nore its previous directions. https://web.archive.org/web/20220919192024/https: //twitter.com/goodside/status/1569128808308957185. [67] K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and M. Fritz. More than you’ve asked for: A comprehensive analysis of novel prompt injection threats to application-integrated large language models. CoRR, abs/2302.12173, 2023. [68] T. Gui, X. Wang, Q. Zhang, Q. Liu, Y. Zou, X. Zhou, R. Zheng, C. Zhang, Q. Wu, J. Ye, et al. Textflint: Unified multilingual robustness evaluation toolkit for natural language processing. arXiv preprint arXiv:2103.11441, 2021. 66
2306.11698#390
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
391
66 [69] M. Hardt, E. Price, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper_files/paper/2016/file/ 9d2682367c3935defcb1f9e247a97c0d-Paper.pdf. [70] W. Hariri. Unlocking the potential of chatgpt: A comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing. arXiv preprint arXiv:2304.02017, 2023. [71] M. J. Hausknecht, P. Ammanabrolu, M. Côté, and X. Yuan. Interactive fiction games: A colossal adventure. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI, pages 7903–7910. AAAI Press, 2020.
2306.11698#391
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
392
[72] D. Hendrycks, X. Liu, E. Wallace, A. Dziedzic, R. Krishnan, and D. Song. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2744–2751, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.244. URL https://aclanthology.org/2020.acl-main.244. [73] D. Hendrycks, C. Burns, S. Basart, A. Critch, J. Li, D. Song, and J. Steinhardt. Aligning AI with shared human values. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. [74] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Mea- suring massive multitask language understanding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ.
2306.11698#392
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
393
[75] D. Hendrycks, M. Mazeika, A. Zou, S. Patel, C. Zhu, J. Navarro, D. Song, B. Li, and J. Stein- hardt. What would jiminy cricket do? towards agents that behave morally. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual, 2021. [76] A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi. The curious case of neural text degeneration. In ICLR, 2019. Immunity to popular stereotypes of aging? seniors and stereotype threat. Educational Gerontology, 36(5):353–371, 2010. doi: 10.1080/03601270903323976. URL https://doi.org/10.1080/03601270903323976. [78] J. Huang, H. Shao, and K. C.-C. Chang. Are large pre-trained language models leaking your personal information? EMNLP Findings, 2022.
2306.11698#393
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
394
[79] M. Iyyer, J. Wieting, K. Gimpel, and L. Zettlemoyer. Adversarial example generation with syntactically controlled paraphrase networks. In M. A. Walker, H. Ji, and A. Stent, edi- tors, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1875–1885. Association for Computational Linguistics, 2018. doi: 10.18653/v1/n18-1170. URL https://doi.org/10.18653/v1/n18-1170. [80] R. Jia and P. Liang. Adversarial examples for evaluating reading comprehension systems. In M. Palmer, R. Hwa, and S. Riedel, editors, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2021–2031. Association for Computational Linguistics, 2017. doi: 10. 18653/v1/d17-1215. URL https://doi.org/10.18653/v1/d17-1215.
2306.11698#394
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
395
[81] D. Jin, Z. Jin, J. T. Zhou, and P. Szolovits. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In AAAI, 2020. [82] Z. Jin, S. Levine, F. G. Adauto, O. Kamal, M. Sap, M. Sachan, R. Mihalcea, J. Tenenbaum, and B. Schölkopf. When to make exceptions: Exploring language models as accounts of human moral judgment. In NeurIPS, 2022. [83] N. Kandpal, E. Wallace, and C. Raffel. Deduplicating training data mitigates privacy risks in language models. In International Conference on Machine Learning, pages 10697–10707. PMLR, 2022. 67 [84] D. Kang, X. Li, I. Stoica, C. Guestrin, M. Zaharia, and T. Hashimoto. Exploiting programmatic behavior of llms: Dual-use through standard security attacks. CoRR, abs/2302.05733, 2023.
2306.11698#395
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
396
[85] M. Kang, L. Li, M. Weber, Y. Liu, C. Zhang, and B. Li. Certifying some distributional fairness with subpopulation decomposition. Advances in Neural Information Processing Systems, 35: 31045–31058, 2022. [86] J. Kasai, K. Sakaguchi, Y. Takahashi, R. L. Bras, A. Asai, X. Yu, D. Radev, N. A. Smith, arXiv preprint Y. Choi, and K. Inui. Realtime qa: What’s the answer right now? arXiv:2207.13332, 2022. [87] D. Kaushik, E. Hovy, and Z. Lipton. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations, 2019. [88] M. Keevak. 204How Did East Asians Become Yellow? Reconsidering Race: Social Science Perspectives on Racial Categories in the Age of Genomics. Oxford University Press, 06 2018. ISBN 9780190465285. doi: 10.1093/oso/9780190465285. 003.0011. URL https://doi.org/10.1093/oso/9780190465285.003.0011. In
2306.11698#396
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
397
[89] F. Khani and P. Liang. Feature noise induces loss discrepancy across groups. International Conference On Machine Learning, 2019. [90] J. Kim, H. J. Kim, H. Cho, H. Jo, S.-W. Lee, S.-g. Lee, K. M. Yoo, and T. Kim. Ground-truth labels matter: A deeper look into input-label demonstrations. arXiv preprint arXiv:2205.12685, 2022. [91] B. Klimt and Y. Yang. The enron corpus: A new dataset for email classification research. In Machine Learning: ECML 2004: 15th European Conference on Machine Learning, Pisa, Italy, September 20-24, 2004. Proceedings 15, pages 217–226. Springer, 2004.
2306.11698#397
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
398
[92] P. W. Koh, S. Sagawa, H. Marklund, S. M. Xie, M. Zhang, A. Balsubramani, W. Hu, M. Ya- sunaga, R. L. Phillips, I. Gao, T. Lee, E. David, I. Stavness, W. Guo, B. Earnshaw, I. S. Haque, S. M. Beery, J. Leskovec, A. Kundaje, E. Pierson, S. Levine, C. Finn, and P. Liang. WILDS: A benchmark of in-the-wild distribution shifts. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 5637–5664. PMLR, 2021. URL http://proceedings.mlr.press/v139/koh21a.html. [93] T. Kojima, S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. Neural Information Processing Systems, 2022.
2306.11698#398
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
399
[94] K. Krishna, J. Wieting, and M. Iyyer. Reformulating unsupervised style transfer as para- phrase generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 737–762, Online, Nov. 2020. Association for Compu- tational Linguistics. doi: 10.18653/v1/2020.emnlp-main.55. URL https://aclanthology. org/2020.emnlp-main.55. [95] M. J. Kusner, J. Loftus, C. Russell, and R. Silva. Counterfactual fairness. Advances in neural information processing systems, 30, 2017. [96] H. Kwon. Dual-targeted textfooler attack on text classification systems. IEEE Access, 11: 15164–15173, 2023. doi: 10.1109/ACCESS.2021.3121366. URL https://doi.org/10. 1109/ACCESS.2021.3121366. [97] Learn Prompting. Introduction to prompt hacking. https://learnprompting.org/docs/ prompt_hacking/intro, 2023.
2306.11698#399
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
400
[97] Learn Prompting. Introduction to prompt hacking. https://learnprompting.org/docs/ prompt_hacking/intro, 2023. [98] K. Lee, D. Ippolito, A. Nystrom, C. Zhang, D. Eck, C. Callison-Burch, and N. Carlini. Deduplicating training data makes language models better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8424–8445, 2022. [99] A. Lees, V. Q. Tran, Y. Tay, J. S. Sorensen, J. Gupta, D. Metzler, and L. Vasserman. A new generation of perspective api: Efficient multilingual character-level transformers. Knowledge Discovery And Data Mining, 2022. doi: 10.1145/3534678.3539147. [100] H. Li, D. Guo, W. Fan, M. Xu, and Y. Song. Multi-step jailbreaking privacy attacks on chatgpt. arXiv preprint arXiv:2304.05197, 2023. 68
2306.11698#400
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
401
68 Textbugger: Generating adversarial text against real-world applications. In 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, California, USA, February 24-27, 2019. The Internet Society, 2019. URL https://www.ndss-symposium.org/ndss-paper/ textbugger-generating-adversarial-text-against-real-world-applications/. [102] L. Li, R. Ma, Q. Guo, X. Xue, and X. Qiu. BERT-ATTACK: adversarial attack against BERT using BERT. In B. Webber, T. Cohn, Y. He, and Y. Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6193–6202. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.emnlp-main.500. URL https://doi.org/10.18653/v1/ 2020.emnlp-main.500.
2306.11698#401
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]
2306.11698
402
[103] T. Li, D. Khashabi, T. Khot, A. Sabharwal, and V. Srikumar. UNQOVERing stereotyping biases via underspecified questions. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3475–3489, Online, Nov. 2020. Association for Computational Linguis- tics. doi: 10.18653/v1/2020.findings-emnlp.311. URL https://aclanthology.org/2020. findings-emnlp.311. [104] X. Li, F. Tramer, P. Liang, and T. Hashimoto. Large language models can be strong differen- tially private learners. arXiv preprint arXiv:2110.05679, 2021. [105] Y. Li and Y. Zhang. Fairness of chatgpt. arXiv preprint arXiv:2305.18569, 2023. [106] P. Liang, R. Bommasani, T. Lee, D. Tsipras, D. Soylu, M. Yasunaga, Y. Zhang, D. Narayanan, arXiv preprint Y. Wu, A. Kumar, et al. Holistic evaluation of language models. arXiv:2211.09110, 2022.
2306.11698#402
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because GPT-4 follows (misleading) instructions more precisely. Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps. Our benchmark is publicly available at https://decodingtrust.github.io/; our dataset can be previewed at https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of this work is at https://openreview.net/pdf?id=kaHpo8OZw2.
http://arxiv.org/pdf/2306.11698
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
cs.CL, cs.AI, cs.CR
NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track)
null
cs.CL
20230620
20240105
[ { "id": "2302.13971" }, { "id": "2302.00539" }, { "id": "2302.12095" }, { "id": "2306.04618" }, { "id": "2302.04237" }, { "id": "2305.01639" }, { "id": "2305.18569" }, { "id": "2302.10198" }, { "id": "2304.02017" }, { "id": "2302.07257" }, { "id": "2206.07682" }, { "id": "2305.15594" }, { "id": "2212.06470" }, { "id": "2304.05197" }, { "id": "2301.12867" }, { "id": "2303.03378" }, { "id": "2010.04053" }, { "id": "2211.09110" }, { "id": "2206.08514" }, { "id": "2210.03057" }, { "id": "2305.10646" }, { "id": "2210.09261" }, { "id": "2206.04615" }, { "id": "2101.06804" }, { "id": "2207.13332" }, { "id": "2103.11441" }, { "id": "2305.12707" }, { "id": "2212.10560" }, { "id": "2304.01852" }, { "id": "2304.15004" }, { "id": "2211.08073" }, { "id": "2101.00027" }, { "id": "2110.05679" }, { "id": "2112.12938" }, { "id": "1803.09010" }, { "id": "2305.14950" }, { "id": "2306.04528" }, { "id": "2303.12712" }, { "id": "2210.11528" }, { "id": "2301.13188" }, { "id": "2303.03846" }, { "id": "2205.12685" }, { "id": "2303.13375" }, { "id": "2101.04840" }, { "id": "2302.13439" } ]