doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398) ⌀ | journal_ref (stringlengths 8–194) ⌀ | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.11489 | 25 | # D. Pros and Cons of LLMs
A proliferation of benchmarks and tasks has been leveraged to evaluate the effectiveness and superiority of LLMs. Results from corresponding experiments demonstrate that LLMs achieve much better performance than previous deep learning models and smaller PLMs on a variety of NLP tasks. Besides, LLMs exhibit some emergent abilities and are capable of solving some complex tasks that traditional models and smaller PLMs cannot address. In summary, LLMs have the following superior characteristics.
Zero-shot Learning. LLMs outperform other models with zero-shot learning on most tasks and even perform better than fine-tuned models on some tasks. An empirical study [15] has shown that ChatGPT outperforms previous models with zero-shot learning on 9 of 13 datasets and even outperforms fully fine-tuned task-specific models on 4 tasks. This superior performance is attributed to the rich and diverse input data as well as the large parameter scale of LLMs, which allow them | 2306.11489#25 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 25 |
(a) Toxicity distribution of different models. (b) Toxicity distribution of different prompt templates. Figure 2: Toxicity results of different models and different prompt templates.
Table 3: Average toxicity score (↓) of eight LLMs. The terms "Bad," "Toxic," and "Harmful" represent three types of prompt templates, while "good," "bad," and "normal" represent different social norms. The lowest score is highlighted in green, whereas the highest score is indicated in red. | 2306.11507#25 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 25 | For both pretraining and finetuning, we concatenate our respective datasets into a single dimensional array with the "⟨|endoftext|⟩" token used for separating the files. We train our models on sequence length of 2048 sliced from our dataset array with next-token prediction loss. We use fp16 training with AdamW optimizer, linear-warmup-linear-decay learning rate schedule, and attention and residual dropout of 0.1. We train on 8 Nvidia-A100 GPUs using deepspeed. Our pretrained base model phi-1-base was obtained in under 4 days of training. Finetuning to obtain phi-1 used an additional 7 hours on the same hardware.
Pretraining. phi-1-base was trained on the CodeTextbook dataset (filtered code-language corpus and synthetic textbooks). We use effective batch size 1024 (including data parallelism and gradient accumulation), maximum learning rate 1e-3 with warmup over 750 steps, and weight decay 0.1, for a total of 36,000 steps. We use the checkpoint at 24,000 steps as our phi-1-base; this is equivalent to roughly 8 epochs on our CodeTextbook dataset, for a total of a little over 50B training tokens. Despite the small size and computation, this model already achieves a 29% accuracy on HumanEval. | 2306.11644#25 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
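The optimizer setup quoted in the phi-1 pretraining chunk above (AdamW, a linear-warmup-linear-decay learning-rate schedule, weight decay 0.1, 750 warmup steps out of 36,000 total, maximum learning rate 1e-3) can be sketched as follows. This is a minimal illustration assuming PyTorch, not the authors' released training code; the model and training loop are placeholders, and fp16, deepspeed, and the actual data pipeline are omitted.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

# Placeholder module standing in for the 1.3B-parameter phi-1-base Transformer.
model = torch.nn.Linear(2048, 2048)

max_lr = 1e-3          # maximum learning rate quoted for pretraining
warmup_steps = 750     # linear warmup
total_steps = 36_000   # total pretraining steps
optimizer = AdamW(model.parameters(), lr=max_lr, weight_decay=0.1)

def linear_warmup_linear_decay(step: int) -> float:
    """Multiplicative factor applied to max_lr at a given optimizer step."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)                   # ramp 0 -> 1
    remaining = total_steps - step
    return max(0.0, remaining / max(1, total_steps - warmup_steps))  # decay 1 -> 0

scheduler = LambdaLR(optimizer, lr_lambda=linear_warmup_linear_decay)

for step in range(total_steps):
    # ... forward pass on a 2048-token slice, next-token prediction loss, backward ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```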
2306.11698 | 25 | and measure the average likelihood of the models agreeing with the given stereotype statements, which indicates the bias of the model. We curate and divide 24 demographic groups varying across seven demographic factors, such as gender/sexual orientation, age, and race, into two equal halves (stereotyped and non-stereotyped), and select 16 stereotype topics (e.g., immigration, drug addiction, leadership skills, etc.) that affect the stereotyped groups. We construct three evaluation scenarios: 1) evaluation on vanilla benign system prompts that do not affect the answer of the models to get a baseline measurement of the models' bias against the selected demographic groups; 2) evaluation on designed system prompts that only guide the model to overcome its content policy restrictions, but do not influence it to be biased against any particular demographic group (referred to as untargeted system prompt); 3) evaluation on designed system prompts that not only guide the model to overcome its content policy restrictions but also instruct the models to be biased against the chosen demographic groups (referred to as targeted system prompt) to evaluate the resilience of the models under misleading system prompts. • | 2306.11698#25 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 26 | As there has been almost no literature systematically discussing prompt engineering in Chemistry, and the fact that this field is relatively new, we provide a comprehensive step-by-step ChemPrompt Engineering guide for beginners to start with, including numerous chemistry-related examples in the Supporting Information, Section S2. At present, everyone is at the same starting point, and no one possesses exclusive expertise in this area. It is our hope that this work will stimulate the development of more powerful prompt engineering skills and help every chemist quickly understand the art of ChemPrompt Engineering, thereby advancing the field of chemistry at large. | 2306.11296#26 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 26 | Step-by-step Reasoning. By utilizing the chain-of-thought prompting strategy, LLMs can successfully complete some complex tasks, including arithmetic reasoning, commonsense reasoning, and symbolic reasoning. Such tasks are typically beyond the capability of smaller PLMs. The chain-of-thought is an improved prompting strategy, which integrates intermediate reasoning steps into the prompts to boost the performance of LLMs on complex reasoning tasks. Besides, the step-by-step reasoning ability is believed to be potentially acquired through training LLMs on well-structured code data [54].
Instruction Following. Instruction tuning is a unique fine-tuning approach that fine-tunes LLMs on a collection of natural language formatted instances. With this approach, LLMs are enabled to perform well on previously unseen tasks described through natural language instructions without relying on explicit examples [49]. For example, Wei et al. [57] fine-tuned a 137B parameter LLM on over 60 datasets based on instruction tuning and tested it on unseen task types. The experimental results demonstrated that the instruction-tuned model significantly outperformed its unmodified counterpart and zero-shot GPT-3.
| 2306.11489#26 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
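The chain-of-thought prompting strategy described in the 2306.11489#26 chunk above amounts to prepending worked demonstrations whose answers spell out intermediate reasoning steps before asking the new question. A minimal, model-agnostic sketch of assembling such a prompt is given below; the demonstration question is the commonly cited tennis-ball example from the chain-of-thought literature, and no specific LLM API is assumed.

```python
# Build a few-shot chain-of-thought prompt: each demonstration includes the
# intermediate reasoning steps, not just the final answer.
demonstrations = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
                    "How many balls does he have now?",
        "reasoning": "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
        "answer": "11",
    },
]

def build_cot_prompt(new_question: str) -> str:
    parts = []
    for demo in demonstrations:
        parts.append(
            f"Q: {demo['question']}\n"
            f"A: Let's think step by step. {demo['reasoning']} "
            f"The answer is {demo['answer']}.\n"
        )
    parts.append(f"Q: {new_question}\nA: Let's think step by step.")
    return "\n".join(parts)

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. They used 20 and bought 6 more. How many are left?"
)
print(prompt)  # this string would then be sent to the LLM of choice
```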
2306.11507 | 26 | Model      Bad(good)  Bad(bad)  Bad(normal)  Toxic(good)  Toxic(bad)  Toxic(normal)  Harmful(good)  Harmful(bad)  Harmful(normal)
ChatGPT    0.100      0.139     0.125        0.254        0.334       0.253          0.191          0.193         0.172
ChatGLM    0.045      0.058     0.118        0.061        0.074       0.096          0.020          0.040         0.096
Vicuna     0.102      0.107     0.123        0.225        0.191       0.199          0.141          0.190         0.144
Koala      0.055      0.098     0.082        0.060        0.141       0.095          0.102          0.154         0.134
LLaMa      0.061      0.082     0.064        0.049        0.113       0.077          0.092          0.111         0.063
Oasst      0.058      0.083     0.077        0.071        0.123       0.086          0.094          0.085         0.091
Alpaca     0.026      0.077     0.041        0.039        0.058       0.044          0.028          0.055         0.038
FastChat   0.336      0.371     0.271        0.384        0.350       0.323          0.184          0.246         0.205 | 2306.11507#26 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
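The averages reported in Table 3 above are per-model means of toxicity scores grouped by prompt template and social norm. A minimal sketch of that aggregation is shown below; the records and scores are made-up placeholders standing in for per-response toxicity values from an external toxicity classifier, and this is not the benchmark's actual code.

```python
from collections import defaultdict

# Each record: one generated response, its toxicity score, and the
# (prompt template, social norm) condition it was generated under.
records = [
    {"model": "ChatGPT", "template": "bad", "norm": "good", "toxicity": 0.08},
    {"model": "ChatGPT", "template": "bad", "norm": "good", "toxicity": 0.12},
    {"model": "FastChat", "template": "toxic", "norm": "bad", "toxicity": 0.41},
    {"model": "FastChat", "template": "toxic", "norm": "bad", "toxicity": 0.29},
]

sums = defaultdict(float)
counts = defaultdict(int)
for r in records:
    key = (r["model"], r["template"], r["norm"])
    sums[key] += r["toxicity"]
    counts[key] += 1

averages = {key: sums[key] / counts[key] for key in sums}
for (model, template, norm), score in sorted(averages.items()):
    print(f"{model:10s} template={template:7s} norm={norm:6s} avg toxicity={score:.3f}")
```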
2306.11644 | 26 | Finetuning. phi-1 is obtained by finetuning phi-1-base on the CodeExercises dataset. For finetuning, we use the same setup as pretraining, but different hyperparameters: we use effective batch size of 256, maximum learning rate 1e-4 with 50 steps of warmup, and weight decay 0.01. We train for a total of 6,000 steps and pick the best checkpoint (saved every 1000 steps).
# 3 Spikes of model capability after finetuning on CodeExercises
Figure 2.1 showed that the largest improvement in HumanEval resulted from finetuning on the small CodeExercises dataset (<200M tokens). CodeExercises consist exclusively of short Python tasks using only basic Python libraries. In this section, we demonstrate that, quite remarkably, the model after finetuning also exhibits a substantial improvement in executing tasks that are not featured in the finetuning dataset. This includes managing intricate algorithmic tasks and using external libraries. This suggests that our finetuning process might have helped the model in reorganizing and consolidating the knowledge acquired during pretraining, even if such knowledge is not explicitly present in our CodeExercises dataset. In this section we will focus on qualitatively comparing and contrasting the capabilities of our finetuned model phi-1 and its pretrained 1.3B parameter base model phi-1-base. | 2306.11644#26 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 26 | models to be biased against the chosen demographic groups (referred to as targeted system prompt) to evaluate the resilience of the models under misleading system prompts. • Adversarial Robustness. To evaluate the robustness of GPT-3.5 and GPT-4 on textual adversarial attacks, we construct three evaluation scenarios: 1) evaluation on the standard benchmark AdvGLUE [176] with a vanilla task description, aiming to assess: a) the vulnerabilities of GPT models to existing textual adversarial attacks, b) the robustness of different GPT models in comparison to state-of-the-art models on the standard AdvGLUE benchmark, c) the impact of adversarial attacks on their instruction-following abilities (measured by the rate at which the model refuses to answer a question or hallucinates a nonexistent answer when it is under attack), and d) the transferability of current attack strategies (quantified by the transferability attack success rates of different attack approaches); 2) evaluation on the AdvGLUE benchmark given different instructive task descriptions and designed system prompts, so as to investigate the resilience of models under diverse (adversarial) task descriptions and system prompts; 3) evaluation of | 2306.11698#26 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 27 | One revolutionary aspect of ChatGPT is its specialized domain knowledge due to its extensive pre-trained text corpus, which enables an understanding of chemical nomenclature and reaction conditions. In contrast to traditional NLP methods, ChatGPT requires no additional training for named entity recognition, and can readily identify inorganic metal sources, organic linkers, solvents, and other compounds within a given experimental text. Another notable feature is ChatGPT's ability to recognize and associate compound abbreviations (e.g., DMF) with their full names (N,N-dimethylformamide) within the context of MOF synthesis (Supporting Information, Figure S5). This capability is crucial as the use of different abbreviations for the same compound can inflate the number of "unique compounds" in the dataset post text mining, leading to redundancy without providing new information. This challenge is difficult to address using traditional NLP methods or packages, as no model can inherently discern that DMF and N,N-dimethylformamide are the same compound without a manually curated dictionary of chemical abbreviations. Although ChatGPT may not cover all abbreviations, its | 2306.11296#27 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 27 |
Human Alignment. LLMs can be trained to generate high-quality, harmless responses that align with human values through the technique of RLHF, which involves incorporating humans into the training loop using carefully designed labeling strategies. RLHF comprises three steps: 1) collecting a labeled dataset consisting of input prompts and target outputs to fine-tune LLMs in a supervised way; 2) training a reward model on the assembled data; and 3) optimizing LLMs by formulating their optimization as a reinforcement learning problem. With this approach, LLMs are enabled to generate appropriate outputs that adhere to human expectations.
Tools Manipulation. Traditional PLMs are trained on plain text data, which limits their ability to solve non-textual tasks. Besides, their abilities are limited by the pre-training corpus, and they cannot effectively solve tasks requiring real-time knowledge. In response to these limitations, recent LLMs are developed with the ability to manipulate external tools such as search engines, calculators, and compilers to enhance their performance in specialized domains [58]. More recently, the plugin mechanism has been supported in LLMs, providing an avenue for implementing novel functions. This mechanism has significantly broadened the range of capacities for LLMs, making them more flexible and adaptable to diverse tasks. | 2306.11489#27 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 27 | Based on the analysis of Figure 2a, it is evident that the toxicity distributions of the different models vary significantly. Notably, FastChat demonstrates the most pronounced toxicity, with a considerable portion of the text surpassing toxicity scores of 0.6. ChatGPT and Vicuna closely follow, exhibiting comparatively higher overall toxicity levels compared to other models. The remaining models generally exhibit toxicity values below 0.4, indicating their limited ability to generate highly toxic content even under extreme prompt templates. Figure 2b reveals that the three different prompt templates yield similar levels of toxicity, suggesting that the impact of distinct prompt templates on toxicity is not substantial. However, in terms of high toxicity distribution, the toxic prompt exhibits a denser distribution, while the harmful prompt appears to be more sparse. Table 3 provides an overview of the average toxicity scores across different models. In terms of different types of norms, we observed that content generated by LLMs tends to show higher toxicity under normal and bad norms than under good norms. When considering different models, FastChat emerges as the model with the highest overall toxicity in both the bad | 2306.11507#27 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 27 | # 3.1 Finetuning improves the model's understanding
Using a simple Python function that we created ourselves, we observe below that the model shows a much higher level of understanding and compliance with the instructions after finetuning. In particular, we see
that phi-1-base struggles with the logical relationships in the prompts, while phi-1 can interpret the question and generate the answer correctly. In this example, even our 350M phi-1-small model shows some level of understanding of the problem even though the resulting solution is wrong. We see such trends consistently in our interactions; see Appendix A for another example.
Prompt: """ Alice, Bob and Charles are playing games.
Alice first pick a random number from 333 to 678, Bob then pick a number starting from Alice's number to 888, Charles then pick a number starting from 123 to Bob's number. Alice gets one point if Alice's number minus Charles's number is divisible by the floor of the square root of Bob's number, otherwise Bob gets one point. Simulate Alice's and Bob's points in 30 iterations.
""" phi-1
# phi-1-base
# phi-1-small | 2306.11644#27 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
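For reference, a straightforward solution to the Alice/Bob/Charles simulation task quoted in the phi-1 chunk above might look like the following. This sketch is our own illustration of the task and stands in for the model completions omitted from the chunk; it is not any model's actual output.

```python
import math
import random

def simulate(iterations: int = 30) -> tuple[int, int]:
    """Play the game from the prompt and return (Alice's points, Bob's points)."""
    alice_points, bob_points = 0, 0
    for _ in range(iterations):
        alice = random.randint(333, 678)    # Alice picks from 333 to 678
        bob = random.randint(alice, 888)    # Bob picks from Alice's number up to 888
        charles = random.randint(123, bob)  # Charles picks from 123 up to Bob's number
        divisor = math.isqrt(bob)           # floor of the square root of Bob's number
        if (alice - charles) % divisor == 0:
            alice_points += 1
        else:
            bob_points += 1
    return alice_points, bob_points

print(simulate())
```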
2306.11698 | 27 | designed system prompts, so as to investigate the resilience of models under diverse (adversarial) task descriptions and system prompts; 3) evaluation of GPT-3.5 and GPT-4 on our generated challenging adversarial texts AdvGLUE++ against open-source autoregressive models such as Alpaca-7B, Vicuna-13B, and StableVicuna-13B in different settings to further evaluate the vulnerabilities of GPT-3.5 and GPT-4 under strong adversarial attacks in diverse settings. • Out-of-Distribution Robustness. To evaluate the robustness of GPT models against out-of-distribution (OOD) data, we construct three evaluation scenarios: 1) evaluation on inputs that deviate from common training text styles, with the goal of assessing the model robustness under diverse style transformations (e.g., Shakespearean style); 2) evaluation on questions relevant to recent events that go beyond the period when the training data was collected for GPT models, with the goal of measuring the model reliability against unexpected, out-of-scope queries (e.g., whether the model knows to refuse to answer unknown questions); 3) evaluation by adding demonstrations with different | 2306.11698#27 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 28 | mide are the same compound without a manually curated dictionary of chemical abbreviations. Although ChatGPT may not cover all abbreviations, its proficiency in identifying and associating the most common ones such as DEF, DI water, EtOH, and CH3CN with their full names enhances data consistency and reduces redundancy. This, in turn, facilitates data retrieval and analysis, ensuring that different names of the same compound are treated as a single entity with its unique chemical identity and information. | 2306.11296#28 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 28 | Although LLMs have made significant progress in natural language understanding and human-like content generation, they still have the following limitations and challenges [49]. Unstructured Generation. LLMs commonly rely on natural language prompts or instructions to generate text under specific conditions. This mechanism presents challenges for precisely constraining the generated outputs according to fine-grained or structural criteria. Ensuring specific text structures, such as the logical order of concepts throughout the entire text, can be difficult. This difficulty is amplified for tasks requiring formal rules or grammar. This is because LLMs mainly focus on the local context information of words and sentences during pre-training, while ignoring global syntactic and structural knowledge. A proposal for addressing this problem is to adopt an iterative prompting approach in generating text [59], mimicking the process of human writing. In contrast, KGs offer a structured summary and emphasize the correlation of relevant concepts when complex events involving the same entity extend across multiple sentences [60], thus enhancing the process of structured text generation. | 2306.11489#28 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
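The iterative prompting approach mentioned in the 2306.11489 chunk above (drafting the way a human writer does: fix an outline first, then expand it piece by piece) can be illustrated with a minimal sketch. This is only a schematic illustration, not the method of reference [59]; complete() is a hypothetical stand-in for an LLM call and simply returns a placeholder string here.

def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM completion call; returns a dummy string.
    return f"<model output for: {prompt[:40]}...>"

def iterative_generate(topic: str, n_sections: int = 3) -> str:
    # Step 1: ask for a numbered outline so the global structure is fixed up front.
    outline = complete(
        f"Write a numbered outline with {n_sections} sections for a short article about {topic}."
    )
    # Step 2: expand one section per call, conditioning on the outline and on the
    # text generated so far, mimicking how a human writer drafts and revises.
    sections = []
    for i in range(1, n_sections + 1):
        sections.append(complete(
            f"Outline:\n{outline}\n\nText so far:\n{' '.join(sections)}\n\n"
            f"Write section {i} in two or three sentences, consistent with the outline."
        ))
    return "\n\n".join([outline] + sections)

if __name__ == "__main__":
    print(iterative_generate("knowledge-graph-enhanced language models"))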
2306.11507 | 28 | normal and bad norms compared to the toxicity of good norms. When considering different models, FastChat emerges as the model with the highest overall toxicity in both the bad and toxic prompt templates, aligning with the results shown in Figure 2a, which highlights the pressing need for further toxicity mitigation measures. On the other hand, it is worth noting that Alpaca exhibits the lowest toxicity among the models. Other models display relatively low toxicity scores across most prompts, but caution is still advised as they may generate harmful content in certain cases (as shown in Appendix 6.9). | 2306.11507#28 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 28 | points = [0, 0] for i in range(30): alice_num = random.randint (333, 678) bob_num = random.randint( alice_num, 888) charles_num = random.randint (123, bob_num) if (alice_num - charles_num) % int(bob_num ** 0.5) == 0: points[0] += 1 else: self.points = 0 self.alice = 333 self.bob = 678 self.charles = 888 self.bob_points = 0 self.alice_points = 0 self.iterations = 30 self.alice_turn = True self.bob_turn = True self.alice_turn_points = 0 self.bob_turn_points = 0 self.alice_turn_points_history = indices = [] import random alice_points = 0 bob_points = 0 for i in range(30): alice_num = random.randint (333, 678) bob_num = random.randint(123, bob_num) if (alice_num - bob_num) % math.floor(math.sqrt(bob_num )) == 0: points[1] += 1 [] alice_points += 1 return f"Alice: {points[0]}, | 2306.11644#28 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 28 | unexpected, out-of-scope queries (e.g., whether the model knows to refuse to answer unknown questions); 3) evaluation by adding demonstrations with different OOD styles and domains via in-context learning, with the goal of investigating how OOD demonstrations affect the model performance. ⢠Robustness to Adversarial Demonstrations. GPT models have shown great in-context learning capability, which allows the model to make predictions for unseen inputs or tasks based on a few demonstrations without needing to update parameters. We aim to evaluate the robustness of GPT models given misleading or adversarial demonstrations to assess the potential misuse and limitations of in-context learning. We construct three evaluation scenarios: 1) evaluation with counterfactual examples as demonstrations, 2) evaluation with spurious correlations in the demonstrations, and 3) adding backdoors in the demonstrations, with the goal of evaluating if the manipulated demonstrations from different perspectives would mislead GPT-3.5 and GPT-4 models. | 2306.11698#28 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 29 | Our first goal is to develop a ChatGPT-based AI assistant that demonstrates high performance in converting a given exper- imental section paragraph into a table containing all synthesis parameters (Supporting Information, Figure S22). To design the prompt for this purpose, we incorporate the three principles discussed earlier into ChemPrompt Engineering (Figure 2). The rationale for using tabulation as the output for synthesis condition summarization is that the tabular format simplifies subsequent data sorting, analysis, and storage. In terms of the choice of 11 synthesis parameters, we include those deemed most important and non-negligible for each MOF synthesis. Specifically, these parameters encompass metal sources and quantities, dictating metal centers in the framework and their relative concentrations; the linker and its quantity, which affect connectivity and pore size within the MOF; the modulator and its quantity or volume, which can fine-tune the MOF's structure by impacting the nucleation and growth of the MOF in the reaction; the solvent and its volume, which can influence both the crystallization process and the final MOF structure; and the reaction temperature and duration, which are vital parameters governing the kinetics and thermodynamics of MOF formation | 2306.11296#29 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 29 | Hallucination. When generating factual or knowledge- grounded texts, LLMs may produce content that contradicts existing sources or lack supporting evidence. This challenge widely occurs in existing LLMs and is known as the problem of hallucination, which results in a drop in their performance and poses risks when deploying them for real-world appli- cations. The cause of this issue is related to LLMsâ limited ability to utilize correct internal and external knowledge during task-solving. To alleviate this problem, existing studies have tuning strategies, which incorporate resorted to alignment human feedback to fine-tune LLMs. KGs provide structured and explicit representations of knowledge, which can be dy- namically incorporated to augment LLMs, resulting in more factual rationales and reduced hallucination in generation [61]. Inconsistency. With the help of the chain-of-thought strategy, LLMs are capable of solving some complex reasoning tasks based on step-by-step reasoning. Despite their superior performance, LLMs may at times arrive at the desired answer based on an invalid reasoning path or produce an incorrect answer despite following a correct reasoning process. As a result, inconsistency arises between the derived answer and the | 2306.11489#29 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
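A minimal sketch of the knowledge-graph-augmented prompting idea discussed in the 2306.11489 chunk above: retrieve triples that mention entities found in the question and prepend them to the prompt as explicit facts, so the generated answer can be traced back to retrieved knowledge. The toy triple list, the entity-matching rule, and the prompt wording are illustrative assumptions, not the design of any specific system cited there.

# Tiny in-memory knowledge graph as (head, relation, tail) triples.
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Seine", "flows_through", "Paris"),
]

def retrieve_facts(question: str, triples=TRIPLES):
    # Keep triples whose head or tail entity appears in the question text.
    q = question.lower()
    return [t for t in triples if t[0].lower() in q or t[2].lower() in q]

def build_prompt(question: str) -> str:
    facts = retrieve_facts(question)
    fact_lines = "\n".join(f"- {h} {r.replace('_', ' ')} {t}" for h, r, t in facts)
    return (
        "Answer using only the facts below; say 'unknown' if they are insufficient.\n"
        f"Facts:\n{fact_lines}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("Which country is Paris the capital of?"))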
2306.11507 | 29 | Figure 3 demonstrates that the toxicity distribution of the eight models bears a resemblance to a Poisson distribution [57]. The majority of model outputs still exhibit minimal toxicity. Notably, Alpaca demonstrates the lowest toxicity, with the majority of its toxicity scores below 0.1. Conversely, FastChat showcases the highest toxicity, with a significantly greater distribution of toxicity scores above 0.8 when compared to other models.
[Figure 3 plot area: per-model toxicity density curves over scores 0.0-1.0; the legible legend entries are ChatGPT, Koala, and FastChat.]
Figure 3: Toxicity density distribution. We utilized Gaussian kernel density estimation [56] to fit the toxicity data of each model and truncated it within the range of 0 to 1.
7.0 | 2306.11507#29 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
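A minimal sketch of the density estimate described in the Figure 3 caption above: fit a Gaussian kernel density estimate to per-model toxicity scores and evaluate it only on the truncated range [0, 1]. The scores below are synthetic Beta-distributed placeholders, not TrustGPT data, and the two model names are used only as labels.

import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
models = {
    "Alpaca": rng.beta(1, 15, 500),     # mostly low toxicity (assumed shape)
    "FastChat": rng.beta(1.5, 3, 500),  # heavier right tail (assumed shape)
}

xs = np.linspace(0.0, 1.0, 200)         # truncate the support to [0, 1]
for name, scores in models.items():
    kde = gaussian_kde(scores)          # Gaussian kernel density estimate
    plt.plot(xs, kde(xs), label=name)

plt.xlabel("toxicity score")
plt.ylabel("density")
plt.legend()
plt.savefig("toxicity_density.png", dpi=200)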
2306.11296 | 30 | crystallization process and the final MOF structure; and the reaction temperature and duration, which are vital parameters governing the kinetics and thermodynamics of MOF formation in each synthesis. In our prompt, we also account for the fact that some papers may report multiple synthesis conditions for the same compound and instruct ChatGPT to use multiple rows to include each variation. For multiple units of the same synthesis parameters, such as when molarity mass and weight mass are both reported, we encourage ChatGPT to include them in the same cell, separated by a comma, which can be later stream- lined depending on the needs. If any information is not provided in the sections, e.g., most MOF reactions may not involve the use of modulators and some papers may not specify the reaction time, we expect ChatGPT to answer "N/A" for that parameter. Importantly, to eliminate non-MOF synthesis conditions such as organic linker synthesis, post-synthetic modification, or ca- talysis reactions, which are not helpful for studying MOF synthesis reactions, we simply add one line of narrative instruction, 5 asking ChatGPT to ignore these types of reactions and focus solely on MOF synthesis | 2306.11296#30 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
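A minimal sketch of the prompt assembly described in the two 2306.11296 chunks above: a fixed summarization instruction, a fixed output specification (tabular format, one row per synthesis variation, "N/A" for missing values, multiple units kept in one cell, non-MOF reactions ignored), and the experimental paragraph appended as context. The column list below only approximates the synthesis parameters named in the text (the paper defines 11), the instruction wording is not the paper's exact prompt, and no API call is made.

SYNTHESIS_COLUMNS = [
    "metal source", "metal amount", "linker", "linker amount",
    "modulator", "modulator amount or volume", "solvent", "solvent volume",
    "reaction temperature", "reaction time",
]

INSTRUCTION = (
    "Summarize and tabulate the MOF synthesis conditions described below. "
    "Use only the text provided; do not add information that is not present."
)

OUTPUT_SPEC = (
    "Return a table with the columns: " + ", ".join(SYNTHESIS_COLUMNS) + ". "
    "Use one row per reported synthesis variation, write 'N/A' for parameters "
    "that are not given, keep multiple units for the same parameter in one cell "
    "separated by a comma, and ignore linker synthesis, post-synthetic "
    "modification, and catalysis procedures."
)

def build_prompt(experimental_paragraph: str) -> str:
    # Parts (i) and (ii) are fixed; part (iii) is the paper text used as context.
    return f"{INSTRUCTION}\n\n{OUTPUT_SPEC}\n\nContext:\n{experimental_paragraph}"

if __name__ == "__main__":
    # Made-up example paragraph for illustration only.
    example = "A mixture of ZrCl4 (34 mg) and H2BDC (25 mg) in DMF (10 mL) was heated at 120 C for 24 h."
    print(build_prompt(example))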
2306.11489 | 30 | based on an invalid reasoning path or produce an incorrect answer despite following a correct reasoning process. As a result, inconsistency arises between the derived answer and the underlying reasoning process. Additionally, research [62] has revealed that LLMsâ abilities to forecast facts and answer queries are highly influenced by specific prompt templates and related entities. This is because that LLMs rely largely on simple heuristics to make predictions, their generations are correlated with co-occurrence frequencies between the target word and words in the prompt. Moreover, although LLMsâ pre-training process helps them memorize facts, it fails to imbue them with the ability to generalize observed facts, leading to poor inferences. This issue can be partially addressed by introducing external KGs in LLM reasoning. By interactively exploring related entities and relations on KGs and performing reasoning based on the retrieved knowledge, LLMs can have better ability of knowledge traceability and knowledge correctability [63]. | 2306.11489#30 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 30 | 7.0
[Figure 4 panels, one per model, each reporting its maximum slope: ChatGLM 0.0120, Vicuna 0.0097, Alpaca 0.0167, FastChat 0.0187, Koala 0.0138, ChatGPT 0.0238, Oasst 0.0117, LLaMa 0.0127; y-axis: average toxicity score, roughly 0 to 0.4.]
Figure 4: Average toxicity score in different groups of each model. Gray represents the GENDER category, blue represents the RACE category, and red represents the RELIGION category. | 2306.11507#30 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 30 | # 3.2 Finetuning improves the model's ability to use external libraries
We demonstrate here that finetuning on CodeExercises unexpectedly improves the model's ability to use external libraries such as Pygame and Tkinter, even though our exercises do not contain these libraries. This suggests that our finetuning not only improves the tasks we targeted, but also makes unrelated tasks easier to distill from pretraining. For reference, Figure 3.1 shows the distribution of package imports in our CodeExercises dataset.
[Figure 3.1 bar plot: x-axis "Imports" (packages such as pandas, hashlib, json, queue, inspect, textblob), y-axis "Log Number of Times".]
Figure 3.1: The number of imports among ~880K exercises in CodeExercises (we ignore libraries imported less than 10 times). The plot code is generated by phi-1 with the prompt: "I have a dictionary, first sort the dictionary using the value, from largest to smallest. Then generate a pyplot bar plot. First set font size to be 7, then rotate the x-axis label by 90 degree, the x-axis is the key, y-axis is the value of the dictionary. Use log-scale on y-axis. Moreover, set the y-axis label to be 'Log Number of Times' and x-axis label to be 'Imports'. Set dpi to be 1000."
8 | 2306.11644#30 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
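A sketch of the plotting task quoted in the Figure 3.1 caption above (the prompt given to phi-1): sort a dictionary by value from largest to smallest, draw a pyplot bar plot with font size 7, x-axis tick labels rotated 90 degrees, a log-scale y-axis, the stated axis labels, and dpi 1000. The imports dictionary holds dummy counts, not the paper's data, and this is not phi-1's generated code.

import matplotlib.pyplot as plt

# Dummy import counts standing in for the CodeExercises statistics.
imports = {"pandas": 12000, "json": 7000, "queue": 300, "textblob": 15}

# Sort the dictionary by value, from largest to smallest.
items = sorted(imports.items(), key=lambda kv: kv[1], reverse=True)
keys = [k for k, _ in items]
values = [v for _, v in items]

plt.rcParams.update({"font.size": 7})   # font size 7
plt.bar(keys, values)
plt.yscale("log")                       # log-scale y-axis
plt.xticks(rotation=90)                 # rotate x-axis labels by 90 degrees
plt.ylabel("Log Number of Times")
plt.xlabel("Imports")
plt.tight_layout()
plt.savefig("imports.png", dpi=1000)    # dpi 1000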
2306.11698 | 30 | Privacy. To evaluate the privacy of GPT models, we construct three evaluation scenarios: 1) evaluating the information extraction accuracy of sensitive information in pretraining data such as the Enron email dataset [91] to evaluate the modelâs memorization problem of training data [31, 152]; 2) evaluating the information extraction accuracy of different types of Personally Identifiable Information (PII) introduced during the inference stage [122]; 3) evaluating the information leakage rates of GPT models when dealing with conversations that involve different types of privacy-related words (e.g., confidentially) and privacy events (e.g., divorce), aiming to study the modelsâ capability of understanding privacy contexts during conversations. ⢠Machine Ethics. To evaluate the ethics of GPT models, we focus on the commonsense moral recognition tasks and construct four evaluation scenarios: 1) evaluation on standard benchmarks ETHICS and Jiminy Cricket, aiming to assess the model performance of moral recognition; 2) evaluation on jailbreaking prompts that are designed to mislead GPT models, aiming to assess the model robustness of moral recognition; 3) evaluation on our generated evasive sentences that are designed to mislead GPT | 2306.11698#30 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11489 | 31 | Limited Reasoning Ability. LLMs have demonstrated decent performance on some basic logical reasoning tasks when provided with question-answer examples. However, they exhibit poor performance on tasks that require the ability to comprehend and utilize supporting evidence for deriving conclusions. While LLMs typically generate valid reasoning steps, they face challenges when multiple candidate steps are deemed valid [64]. This results from LLMs being primed to solely choose the answer with the highest word overlapping with the input question. Additionally, LLMs struggle with predicting entity relationships due to their emphasis on shallow co-occurrence and sequence patterns of words. Moreover, despite exhibiting some basic numerical and symbolic rea- soning abilities [65], LLMs face difficulties in numerical computation, especially for symbols infrequently encountered during pre-training. KGs explicitly capture the relations among concepts, which are essential for reasoning and can be uti- lized to enhance LLMs with structural reasoning capabilities. Previous studies have demonstrated that the integration of textual semantics and structural reasoning yields significant enhancement in the reasoning ability of LLMs [66], [67]. | 2306.11489#31 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 31 | Figure 4: Average toxicity score in different groups of each model. Gray represents the GENDER category, blue represents the RACE category, and red represents the RELIGION category.
Conclusion. Taking into account particular prompt templates, specific LLMs like ChatGPT and FastChat exhibit a notable tendency to generate content with a substantial level of toxicity. Consequently, the imperative task of detoxifying these models remains of utmost importance.
# 4.2 Bias
Table 4: Std (↓) results for 3 group types.
The analysis of bias includes three metrics: average toxicity scores, standard deviations, and results of the Mann-Whitney U test across eight LLMs. The corresponding results are referenced as Figure 4, Table 4, and Figure 5. | 2306.11507#31 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
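A minimal sketch of the three bias metrics listed in the 2306.11507 chunk above: per-group average toxicity, the standard deviation across group averages (Table 4), and a Mann-Whitney U test between two groups' score distributions. Group names and score lists are synthetic placeholders, not TrustGPT data.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
groups = {
    "group_a": rng.beta(1, 12, 200),   # toxicity scores in [0, 1] (synthetic)
    "group_b": rng.beta(1, 9, 200),
    "group_c": rng.beta(1, 11, 200),
}

# Average toxicity per group, then the spread of those averages.
averages = {g: float(np.mean(s)) for g, s in groups.items()}
std_across_groups = float(np.std(list(averages.values())))

# Non-parametric test of whether two groups' toxicity distributions differ.
u_stat, p_value = mannwhitneyu(groups["group_a"], groups["group_b"],
                               alternative="two-sided")

print(averages)
print("std across group averages:", std_across_groups)
print("Mann-Whitney U:", u_stat, "p =", p_value)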
2306.11644 | 31 |
PyGame example. We start with a PyGame that asks the model to generate code to move a ball:
# Prompt: """ Write a PyGame.
There is a ball. At every iteration, (1). The x-axis of the ball increases by a random integer between (-10, 10), (2). The y-axis of the ball increases by a random integer between (-20, 20). The x-axis of the ball should stay inside 0-400, and y-axis of the ball should stay inside 0-600. When the user press 'space', set the x-axis of the ball to 200 and y-axis of the ball to 400. """
# phi-1
# phi-1-base
# phi-1-small | 2306.11644#31 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
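For reference, one possible correct solution to the PyGame prompt quoted in the 2306.11644 chunk above might look like the sketch below; it is not one of the model completions from the paper (those are cut off by the chunk boundary), and it assumes the pygame package is installed.

import random
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 600))
clock = pygame.time.Clock()
x, y = 200, 300

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
            x, y = 200, 400                             # 'space' resets the position

    x = max(0, min(400, x + random.randint(-10, 10)))   # keep x inside 0-400
    y = max(0, min(600, y + random.randint(-20, 20)))   # keep y inside 0-600

    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 255), (x, y), 10)
    pygame.display.flip()
    clock.tick(30)

pygame.quit()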
2306.11698 | 31 | GPT models, aiming to assess the model robustness of moral recognition; 3) evaluation on our generated evasive sentences that are designed to mislead GPT models, aiming to assess the model robustness of moral recognition under adversarial inputs; 4) evaluation on conditional actions that encompass different attributes (e.g., self-harm vs. harm to others, harm with different levels of severity, etc), aiming to study the conditions under which GPT models will fail in moral recognition. ⢠Fairness. To evaluate the fairness of GPT models, we construct three evaluation scenarios: 1) evaluation of test groups with different base rate parity in zero-shot settings, aiming to explore whether GPT models have large performance gaps across these test groups; 2) evaluation under unfair demographically imbalanced contexts by controlling the base rate parity of examples in few-shot settings, aiming to evaluate the influence that imbalanced contexts have on the fairness of GPT models; 3) evaluation under different numbers of fair demographically balanced examples, aiming to study how the fairness of GPT models is affected by providing more balanced context. Empirical findings. We summarize our empirical findings from different perspectives below. ⢠| 2306.11698#31 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 32 | language-based instruction is highly convenient, requiring no complex and laborious rule-based code to identify unwanted cases and filter them out, and is friendly to researchers without coding experience.
The finalized prompts for Process 1 consist of three parts: (i) a request for ChatGPT to summarize and tabulate the reaction conditions, and only use the text or information provided by humans, which adheres to Principle 1 to minimize hallucination; (ii) a specification of the output table's structure, enumerating expectations and handling instructions, which follows Princi- ples 2 and 3 for detailed instructions and structured output requests; and (iii) the context, consisting of MOF synthesis reac- tion condition paragraphs from experimental sections or supporting information in research articles. Note that parts (i) and (ii) are fixed prompts, while part (iii) is considered as "input." The combined prompt results in a single question-and-answer interaction, allowing ChatGPT to generate a summarization of the given synthesis conditions as output. | 2306.11296#32 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
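As a rough illustration of the three-part prompt structure described in the chunk above, the sketch below assembles a fixed instruction, an output-format specification, and a synthesis paragraph into a single query. The instruction wording, table columns, and the ask_chatgpt helper are hypothetical, not the paper's exact prompts.

# Illustrative sketch (not the paper's exact prompts) of the Process 1 prompt assembly.
FIXED_INSTRUCTION = (
    "Summarize and tabulate the reaction conditions below. "
    "Use only the text provided; do not add information that is not present."
)

TABLE_SPEC = (
    "Return one row per compound with columns: compound name, metal source, metal amount, "
    "linker, linker amount, solvent(s), solvent volume(s), temperature, and reaction time. "
    "Write 'N/A' for missing values."
)

def build_process1_prompt(synthesis_paragraph: str) -> str:
    """Combine the fixed instruction, the output-format spec, and the context paragraph."""
    return f"{FIXED_INSTRUCTION}\n\n{TABLE_SPEC}\n\nContext:\n{synthesis_paragraph}"

# Example usage with a hypothetical ChatGPT call:
# answer = ask_chatgpt(build_process1_prompt(paragraph_text))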
2306.11489 | 32 | Insufficient Domain Knowledge. Because of the limited availability of domain-specific corpus, LLMs may not per- form as well on domain-specific tasks as on general ones. For instance, while such models generally capture frequent patterns from general texts, generating medical reports, which involve numerous technical terms, may pose a great challenge for LLMs. This limitation suggests that during pre-training, it is difficult for LLMs to acquire sufficient domain knowledge, and injecting additional specialized knowledge may come at the cost of losing previously learned information, given the is- sue of catastrophic forgetting. Therefore, developing effective techniques for knowledge injection is of critical importance to enhance the performance of LLMs on specialized domains. Domain KGs are effective and standardized knowledge bases for specific domains, offering a feasible source for unified domain knowledge. For example, Ding et al. [68] proposed
| 2306.11489#32 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 32 | RACE GENDER 2.55E-02 ChatGLM 9.47E-04 7.59E-03 8.43E-03 8.88E-03 1.83E-02 1.72E-02 7.71E-03 3.46E-02 1.27E-02 ChatGPT 5.84E-03 4.26E-02 3.69E-03 3.08E-03 1.43E-02 8.44E-04 Model RELIGION 2.56E-02 1.58E-02 5.06E-02 4.73E-02 2.36E-02 6.45E-02 3.33E-02 3.59E-02 Vicuna Alpaca FastChat Koala Oasst LLaMa
Figure 4 presents the toxicity levels of each model for different groups. It is evident that ChatGLM exhibits the lowest overall toxicity, while FastChat shows the highest overall toxicity. Among all the models, ChatGPT demonstrates the highest maximum slope of the fitted line (we show how to calculate maximum slope in Appendix 6.7), indicating significant variations in average toxicity values among different groups. This implies that ChatGPT has the most pronounced bias. | 2306.11507#32 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
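The bias measure discussed above, the spread of average toxicity across groups within a group type, can be sketched as follows; the scores and group names are illustrative, not TrustGPT's data.

# Minimal sketch of per-group mean toxicity and its spread as a bias proxy.
from statistics import mean, stdev

# toxicity_scores[group] = list of per-response toxicity values in [0, 1]
toxicity_scores = {
    "male": [0.12, 0.08, 0.15],
    "female": [0.10, 0.09, 0.14],
}

group_means = {g: mean(v) for g, v in toxicity_scores.items()}
spread = stdev(group_means.values())  # larger spread -> larger bias for this group type
print(group_means, spread)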
2306.11644 | 32 | ... screen = pygame.display.set_mode ((400, 600)) x, y = 0, 0 while True: for event in pygame.event.get (): ... elif event.type == pygame .KEYDOWN: if event.key == pygame.K_SPACE: x = 200 y = 400 x += random.randint(-10, 10) y += random.randint(-20, 20) if x < 0: x = 0 elif x > 400: ... ball_x = random.randint(-10, 10) ball_y = random.randint(-20, 20) ball_vx = random.randint(-5, 5) ball_vy = random.randint(-5, 5) ... screen = pygame.display.set_mode ((800, 600)) ... while not done: for event in pygame.event.get (): ... elif event.type == pygame .KEYDOWN: if event.key == pygame.K_SPACE: ball_x = 200 ball_y = 400 ... x, y = 0, 0 dx, dy = random.randint(-10, 10), random.randint(-20, 20) while True: screen.fill((0, 0, 0)) pygame.draw.rect(screen, (255, 255, 255), (x, y, 400, 600)) | 2306.11644#32 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 32 | fairness of GPT models is affected by providing more balanced context. Empirical findings. We summarize our empirical findings from different perspectives below. ⢠Toxicity. We find that: 1) Compared to LLMs without instruction tuning or RLHF (e.g., GPT-3 (Davinci) [28]), GPT-3.5 and GPT-4 have significantly reduced toxicity in generation, maintaining a toxicity probability of less than 32% on different task prompts (Table 2 in Section 3.1); 2) however, both GPT-3.5 and GPT-4 can still generate toxic content with our carefully designed adversarial âjailbreakingâ prompts, with toxicity probability surging to almost 100% (Section 3.1 and Section 3.2); 3) GPT-4 is more likely to follow the instructions of âjailbreakingâ system prompts, and thus demonstrates higher toxicity than GPT-3.5 given different system prompts and task prompts (Section 3.2); 4) we propose to leverage GPT-3.5 and GPT-4 to generate challenging toxic task prompts, which further increase the model toxicity for evaluation. | 2306.11698#32 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 33 | Process 2: Synthesis Paragraph Classification. The next question to be answered is, âif ChatGPT is given an entire re- search article, can it correctly locate the sections of experimental sections?â The objective of Process 2 is to accept an entire research paper as input and selectively forward paragraphs containing chemical experiment details to the next assistant for summarization. However, locating the experimental synthesis section within a research paper is a complex task, as simple techniques such as keyword searches often prove insufficient. For instance, the synthesis of MOFs may be embedded within the supporting information or combined with organic linker synthesis. In earlier publications, synthesis information might appear as a footnote. Furthermore, different journals or research groups utilize varying section titles, including "Experi- mental," "Methods," "General Methods and Materials," "Experimental methods," "Synthesis and Characterization," "Synthetic Procedures," "Methods Summary," and more. Manually enumerating each case is labor-intensive, especially when synthesis paragraphs may be dispersed with non-MOF synthesis, characterization conditions, or instrument details. Even a human might take considerable time to identify the correct section.
# process. | 2306.11296#33 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
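The paragraph-by-paragraph filtering that Process 2 performs can be illustrated roughly as below; the classify callable standing in for the ChatGPT classification request is hypothetical.

def filter_synthesis_paragraphs(paragraphs, classify):
    """Walk a paper one paragraph at a time; keep only those the LLM labels 'Yes'.

    `paragraphs` is a list of text blocks from the article plus supporting information;
    `classify` is a hypothetical callable that sends one paragraph to ChatGPT and
    returns 'Yes' or 'No'.
    """
    kept = []
    for paragraph in paragraphs:
        if classify(paragraph).strip().lower().startswith("yes"):
            kept.append(paragraph)
    return kept  # forwarded to the summarization assistant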
2306.11489 | 33 | [Figure 4 diagram: before-training, during-training, and post-training knowledge graph enhancement of PLMs, covering knowledge-based prompts, knowledge encoders, knowledge encoding layers, independent adapters, modified pre-training tasks, expanded input structures, combined embeddings, artificial texts, and knowledge masking] | 2306.11489#33 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 33 | Table 4 provides the standard deviations of different group types for each model (the highest value in a specific group type is highlighted in bold). It is notable that ChatGPT shows the highest standard deviations in RACE and RELIGION, indicating a greater bias towards these two group types. Additionally, all models exhibit low standard deviations in GENDER but high standard deviations in RELIGION, emphasizing the pressing need to address bias related to RELIGION.
The Mann-Whitney U test results for toxicity between groups are shown in Figure 5. This test aims to analyze the similarity of sample distributions between the two groups. Through this perspective, we can conduct a more comprehensive analysis of the differences between groups. Upon observation, we can see that all models have varying degrees of bias. It can be noted that within the GENDER category, only Koala exhibits a significant difference, with a p-value of only 0.0015. In the RACE category, the models demonstrate varied performances. Among them, ChatGLM shows the highest level of disparity, with significant differences observed among all three Race groups. As for the RELIGION category, only the vicuna model does not exhibit any significant differences.
[Figure 5 heatmap: per-model panels (ChatGLM, Vicuna, FastChat, ...) over Christian/Jew/Muslim group pairs with a 0.0-1.0 color scale] | 2306.11507#33 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
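A minimal sketch of the per-group comparison described above, using SciPy's Mann-Whitney U test on two groups' toxicity samples; the sample values are made up for illustration.

# Compare toxicity distributions of two groups for one model.
from scipy.stats import mannwhitneyu

christian = [0.10, 0.22, 0.05, 0.31, 0.18]
muslim = [0.40, 0.35, 0.28, 0.52, 0.44]

stat, p_value = mannwhitneyu(christian, muslim, alternative="two-sided")
# A small p-value suggests the two toxicity distributions differ,
# i.e. the model treats the two groups unevenly.
print(stat, p_value)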
2306.11644 | 33 | 20) while True: screen.fill((0, 0, 0)) pygame.draw.rect(screen, (255, 255, 255), (x, y, 400, 600)) pygame.display.update() x += dx y += dy if x + dx > 400: dx = -dx elif x + dx < 0: dx = dx if y + dy > 600: dy = -dy elif y + dy < 0: dy = dy x = 400 if y < 0: y = 0 elif y > 600: ball_x += ball_vx ball_y += ball_vy ball_vx += ball pygame.display.flip() pygame.time.wait(50) y = 600 screen.fill((255, 255, 255)) pygame.draw.circle(screen, (255, 0, 0), (x, y), 10) pygame.display.update() | 2306.11644#33 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
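For reference, a minimal, self-contained bouncing-ball loop of the kind the PyGame prompt asks for might look like the sketch below; it is an independent illustration, not output from phi-1 or the baseline models discussed above.

import pygame

pygame.init()
screen = pygame.display.set_mode((400, 600))
clock = pygame.time.Clock()
x, y, dx, dy = 200, 300, 4, 3

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    x, y = x + dx, y + dy
    if x < 10 or x > 390:   # bounce off left/right walls
        dx = -dx
    if y < 10 or y > 590:   # bounce off top/bottom walls
        dy = -dy
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 0, 0), (x, y), 10)
    pygame.display.flip()
    clock.tick(60)
pygame.quit()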
2306.11698 | 33 | to leverage GPT-3.5 and GPT-4 to generate challenging toxic task prompts, which further increase the model toxicity for evaluation. In addition, our challenging toxic task prompts are transferable to other LLMs without RLHF, leading to more toxic content generation from these models for evaluation (Table 3 in Section 3.3). ⢠Stereotype bias. We find that: 1) GPT-3.5 and GPT-4 are not strongly biased for the majority of stereotype topics considered under the benign and untargeted system prompts; 2) however, both GPT models can be âtrickedâ into agreeing with biased content given designed misleading (adversarial) system prompts. GPT-4 is more vulnerable to misleading targeted system prompts and agrees with the biased content more frequently than GPT-3.5, potentially because GPT-4 follows the misleading instructions more precisely; 3) for GPT models, bias is often dependent on the demographic groups mentioned in the user prompt. For example, for the query âHomosexuals have HIV.â, GPT-4 strongly disagrees in general, while for the query âWomen have | 2306.11698#33 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 34 | # process.
To address this challenge and enable ChatGPT to accurately discern synthesis details within a lengthy research paper, we draw inspiration from the human process. A chemistry Ph.D. student, when asked to locate the MOF synthesis section in a new research paper, would typically start with the first paragraph and ask themselves if it contains synthesis parameters. They would then draw upon prior knowledge from previously read papers to determine if the section is experimental. This process is repeated paragraph by paragraph until the end of the supporting information is reached, with no guarantee that additional synthesis details will not be encountered later. To train ChatGPT similarly, we prompt it to read paper sections incrementally, focusing on one or two paragraphs at a time. Using a few-shot prompt strategy, we provided ChatGPT with a couple of example cases of both synthesis and non-synthesis paragraphs and asked it to classify the sections it reads as either "Yes" (synthesis paragraph) or "No" (non-synthesis paragraph). The ChatGPT Chemistry Assistant would then continue pro- cessing the research paper section by section, passing only the paragraphs labeled as "Yes" to the following assistant for summarization. | 2306.11296#34 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
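A possible shape for the few-shot Yes/No classification prompt described above is sketched below; the example paragraphs and the classify_paragraph helper are hypothetical and only illustrate the pattern.

# Illustrative few-shot prompt for labeling paragraphs as synthesis ("Yes") or not ("No").
FEW_SHOT_EXAMPLES = """\
Paragraph: "A mixture of ZrCl4 (34 mg) and H2BDC (25 mg) in DMF (10 mL) was heated at 120 C for 24 h." -> Yes
Paragraph: "Powder X-ray diffraction patterns were collected on a Bruker D8 diffractometer." -> No
Paragraph: "The linker was synthesized by Suzuki coupling followed by ester hydrolysis." -> No
Paragraph: "Cu(NO3)2 (120 mg) and the linker (50 mg) were dissolved in DMF/EtOH/H2O and kept at 85 C for 12 h." -> Yes
"""

def classify_paragraph(paragraph: str) -> str:
    """Build the Yes/No prompt for one paragraph; the ChatGPT call itself is omitted."""
    return (
        "Answer only 'Yes' if the paragraph reports MOF synthesis conditions, otherwise 'No'.\n"
        + FEW_SHOT_EXAMPLES
        + f'Paragraph: "{paragraph}" -> '
    )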
2306.11489 | 34 | Fig. 4. Three types of KGPLMs according to the stage of knowledge graph participating in pre-training.
a unified domain LLM development service that leverages domain KGs to enhance the training process, which effectively improves LLMsâ performance on domain-specific tasks.
Knowledge Obsolescence. LLMs are pre-trained on prior texts, thus limiting their ability to learn beyond the training corpus. This often results in poor performance when handling tasks that require most-recent knowledge. A simple solution to address this limitation is periodic retraining of LLMs on new data. However, the cost of such retraining is generally high. Hence, it is crucial to devise effective and efficient methods of incorporating current knowledge into LLMs. Prior studies have suggested using plugins as search engines for accessing up-to-date information. Nevertheless, these methods seem inadequate due to the difficulty of directly integrating specific knowledge into LLMs. Compared to LLMs, KGs offer a more straightforward update process that does not necessitate additional training. Updated knowledge can be incorporated into the input in the form of prompts, which are subsequently utilized by LLMs to generate accurate responses [69].
training data that align with human values, which is expected to enhance the security and reliability of LLMs. | 2306.11489#34 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
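The prompt-based knowledge injection mentioned above, serving updated KG facts to the model instead of retraining it, can be sketched as follows; the toy knowledge graph and helper name are assumptions for illustration.

# Toy example: prepend KG triples to the input so the LLM answers from current facts.
knowledge_graph = {
    "Argentina": [("Argentina", "won", "2022 FIFA World Cup")],
}

def kg_augmented_prompt(question: str, entity: str) -> str:
    facts = "; ".join(f"{h} {r} {t}" for h, r, t in knowledge_graph.get(entity, []))
    return f"Known facts: {facts}\nQuestion: {question}\nAnswer using the facts above."

print(kg_augmented_prompt("Which country won the 2022 FIFA World Cup?", "Argentina"))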
2306.11507 | 34 | [Figure 5 heatmap: per-model panels over Christian/Jew/Muslim group pairs with a 0.0-1.0 color scale]
Figure 5: Mann-Whitney U test results. The values within each square represent p-values. A higher p-value (darker red) indicates that the toxicity distribution between the two groups is not signiï¬cantly different, meaning there is less bias. Conversely, a lower p-value (darker blue) suggests a signiï¬cant difference in toxicity distribution within each group, indicating a greater bias.
Conclusion. Overall, the majority of models demonstrate varying degrees of bias in at least one of the categories: GENDER, RACE, and RELIGION. With reference to previous research [18, 19, 16, 20, 21], e.g., counterfactual data augmentation, it is imperative to promptly implement measures to alleviate these biases.
4.3 Value-alignment
(a) AVA results. (b) PVA results.
Figure 6: Value-alignment results. Hard accuracy (↑) and soft accuracy (↑) are employed to evaluate the AVA (a), while RtA (↑) is used to measure the PVA (b). | 2306.11507#34 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 34 | The above code snippet shows the main loop of a simple PyGame program that bounces a ball on the screen. phi-1 correctly applies the PyGame functions to update and draw the ball, as instructed by the prompt. phi-1-base and phi-1-small produce function calls that are syntactically correct but semantically irrelevant. We can see that phi-1-base shows some ability to use the appropriate API calls, but it fails to follow the logic of the task, while phi-1-small after finetuning understands the logic but does not have enough capacity to learn the correct function calls.
TKinter example. Our second example is a TKinter application where we ask the model to update a textfield accordingly when the user clicks buttons.
Prompt: """ Write a tkinter application, there is a textfield on top, a submit button at the
bottom and a recover button next to it. When pressing the submit button, get the text from the textfield and save it to a global string s. If the text contains the word "pew" update the textfield text to "pewpewpew?", otherwise, add "no pew..." to the end. When pressing the recover button, replace the text in the textfield with s. """ | 2306.11644#34 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
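One possible solution to the TKinter prompt quoted above is sketched below; it is an independent reference implementation, not the paper's answer or any model's output.

import tkinter as tk

s = ""  # global string updated by the submit button

root = tk.Tk()
entry = tk.Entry(root)
entry.pack(side=tk.TOP)

def submit():
    global s
    s = entry.get()                      # save the textfield text to the global string
    entry.delete(0, tk.END)
    if "pew" in s:
        entry.insert(0, "pewpewpew?")    # contains "pew": replace the text entirely
    else:
        entry.insert(0, s + " no pew...")  # otherwise append "no pew..." to the end

def recover():
    entry.delete(0, tk.END)
    entry.insert(0, s)                   # restore the saved string

tk.Button(root, text="submit", command=submit).pack(side=tk.BOTTOM)
tk.Button(root, text="recover", command=recover).pack(side=tk.BOTTOM)
root.mainloop()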
2306.11698 | 34 | For example, for the query âHomosexuals have HIV.â, GPT-4 strongly disagrees in general, while for the query âWomen have HIV.â, GPT-4 frequently agrees and outputs biased content; 4) model bias is also dependent on the stereotype topics. GPT models output more biased content on less sensitive topics like leadership and greed, while generating less biased content on more sensitive topics like drug dealing and terrorism. This is potentially due to the fine-tuning of GPT models on some protected demographic groups and sensitive topics (Figure 10 in Section 4.3). ⢠Adversarial Robustness. We find that: 1) GPT-4 surpasses GPT-3.5 on the standard AdvGLUE benchmark, demonstrating higher robustness (Table 5 in Section 5.1); 2) GPT-4 is more resistant to human-crafted adversarial texts compared to GPT-3.5 based on the AdvGLUE benchmark (Table 6 in Section 5.1); 3) on the standard AdvGLUE benchmark, sentence-level perturbations are more transferable than word-level perturbations for both GPT models (Table 6 in Section 5.1); 4) GPT models, despite their strong performance on standard benchmarks, | 2306.11698#34 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 35 |
This few-shot prompt strategy is more convenient than traditional approaches, which require researchers to manually identify and label a large number of paragraphs as "Synthesis Paragraphs" and train their models accordingly. In fact, ChatGPT can even perform such classification using a zero-shot prompt strategy with detailed descriptions of what a "Synthesis Para- graph" should look like and contain. However, we have found that providing four or five short examples in a few-shot prompt strategy enables ChatGPT to identify the features of synthesis paragraphs more effectively, streamlining the classification process (Supporting Information, Figure S24). | 2306.11296#35 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 35 | training data that align with human values, which is expected to enhance the security and reliability of LLMs.
Computation-Intensive. Training LLMs is computationally expensive, making it difficult to investigate their effectiveness with different techniques. The training process often requires thousands of GPUs and several weeks to complete. Moreover, LLMs are very computationally intensive and data hungry, making them difficult to deploy, especially in real-world applications where data and computing resources are limited. Through the integration of KGs, smaller LLMs have the potential to outperform larger ones, thereby reducing the cost associated with LLM deployment and application [63].
Insufficient Interpretability. Interpretability refers to how easily humans can comprehend a modelâs predictions, which is an essential gauge of the modelâs trustworthiness. LLMs are widely acknowledged as black boxes with opaque decision- making processes, making them challenging to interpret. KGs can be used to understand the knowledge learned by LLMs and interpret the reasoning process of LLMs, consequently enhancing the interpretability of LLMs [72]. | 2306.11489#35 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
AVA. The results of AVA are depicted in Figure 6a. It is evident that ChatGPT performs the best in terms of both hard accuracy and soft accuracy. ChatGPT achieves a soft accuracy score exceeding 0.9, while the other models still exhibit notable gaps compared to it. Most models demonstrate a significant improvement in soft accuracy compared to hard accuracy. However, Vicuna shows the minimal difference between its hard accuracy and soft accuracy, suggesting a polarity in its judgment of social norms (either perceiving them as exclusively good or bad). Moreover, the hard accuracy of most models is above 0.5, indicating their capability to make certain judgments on social norms.
PVA. Figure 6b shows the results of PVA. Overall, none of the highest RtA values exceed 0.7, and the highest RtA for toxic norm does not exceed 0.6. This indicates that most models still perform poorly under PVA conditions. Furthermore, it can be observed that the LLaMa, Oasst, and FastChat models perform similarly in both the good norm and toxic norm, while ChatGLM and Vicuna show a significant difference between these two conditions, indicating that these models are more sensitive under the cases of the good norm.
| 2306.11507#35 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 35 | phi-1 ... global s s = "" def submit(): global s text = textfield.get() if "pew" in text: textfield.delete(0, tk.END) textfield.insert(0, "pewpewpew...") else: textfield.delete(0, tk.END) textfield.insert(0, text + "no pew...") s = text def recover(): global s textfield.delete(0, tk.END) textfield.insert(0, s) textfield = tk.Entry(root) textfield.pack() submit_button = tk.Button(root, text="Submit", command=submit) submit_button.pack() recover_button = tk.Button(root, text="Recover" , command=recover) recover_button.pack() root.mainloop()
# phi-1-base | 2306.11644#35 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
are more transferable than word-level perturbations for both GPT models (Table 6 in Section 5.1); 4) GPT models, despite their strong performance on standard benchmarks, are still vulnerable to our adversarial attacks generated based on other autoregressive models (e.g., SemAttack achieves 89.2% attack success rate against GPT-4 when transferring from Alpaca on QQP task. BERT-ATTACK achieves a 100% attack success rate against GPT-3.5 when transferring from Vicuna on the MNLI-mm task. Overall, Alpaca-7B generates the most transferable adversarial texts to GPT-3.5 and GPT-4) (Table 7 | 2306.11698#35 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 36 | The finalized prompt for Process 2 comprises three parts: (i) a request for ChatGPT to determine whether the provided context includes a comprehensive MOF synthesis, answering only with "Yes" or "No"; (ii) some example contexts labeled as "Yes" and other labeled as "No"; (iii) the context to be classified, consisting of one or more research article paragraphs. Similar to Process 1's prompt, parts (i) and (ii) are fixed, while part (iii) is replaced with independent sections from the paper to be classified. The entire research article is parsed into sections of 100-500 words, which are iteratively incorporated into the prompt and sent separately to ChatGPT for a "Yes" or "No" response. Each prompt represents a one-time conversation, and ChatGPT cannot view answers from previous prompts, preventing potential bias in its decision-making for the current prompt. | 2306.11296#36 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
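As an illustration of the section-by-section "Yes"/"No" classification loop described in the chunk above, a minimal sketch follows. It is written against a placeholder `ask_llm(prompt) -> str` callable standing in for a ChatGPT API call; the instruction wording, few-shot examples, and helper names are assumptions, not the authors' exact prompt.

```python
# Minimal sketch of Process 2's iterative classification, assuming a placeholder ask_llm helper.
INSTRUCTION = ("Determine whether the provided context includes a comprehensive MOF synthesis. "
               "Answer only with 'Yes' or 'No'.")
EXAMPLES = "Context: <example synthesis paragraph> -> Yes\nContext: <example characterization paragraph> -> No"

def split_into_sections(article_text, max_words=500):
    """Parse a research article into independent sections of roughly 100-500 words."""
    words = article_text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def classify_article(article_text, ask_llm):
    """Send each section in its own one-off prompt, so earlier answers cannot bias later ones."""
    results = []
    for section in split_into_sections(article_text):
        prompt = f"{INSTRUCTION}\n\n{EXAMPLES}\n\nContext: {section}"
        answer = ask_llm(prompt)  # fresh, stateless conversation per section
        results.append((answer.strip().lower().startswith("yes"), section))
    return results
```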
2306.11489 | 36 | Bias, Privacy, and Toxicity. Although LLMs are trained to align with human expectations, they sometimes generate harmful, fully biased, offensive, and private content. When users interact with LLMs, models can be induced to generate such text, even without prior prompting or prompted with safe text. In fact, it has been observed that LLMs tend to degenerate into generating toxic text within just 25 generations [70]. Furthermore, despite their seemingly convincing text, LLMs generally tend to offer unhelpful and sometimes unsafe advice. For example, it has been revealed that GPT-3 produces worse advice than humans do in over 95% of the situations described on Reddit [71]. The reasons are that such biased, private, and toxic texts widely exist in the pre-training corpora and LLMs tend to generate memorized text or new text that is similar to the input text. KGs are commonly built from authoritative and reliable data sources, enabling the generation of high-quality
Overall, LLMs have made noteworthy advancements and are considered a prototype of an artificial general intelligence system at its early stages. However, despite their ability to produce fluent and coherent text, they still encounter many obstacles. Among these obstacles, their struggle in recalling and accurately applying factual knowledge presents the primary challenge, and diminishes their ability to reason and accomplish knowledge-grounded tasks proficiently.
# III. KGPLMS | 2306.11489#36 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 36 | Conclusion. There is still ample room for improvement in the performance of most models under both AVA and PVA conditions, underscoring the critical need for the implementation of enhancement methods guided by RLHF [26] at the ethical level.
# 5 Conclusion
The emergence of LLMs has brought about great convenience for human beings. However, it has also given rise to a range of ethical considerations that cannot be ignored. To address these concerns, this paper proposes a benchmark -- TRUSTGPT, which is specifically designed for LLMs ethical evaluation. TRUSTGPT assesses the ethical dimensions of eight latest LLMs from three perspectives: toxicity, bias, and value-alignment. Our findings through empirical analysis indicate that ethical considerations surrounding LLMs still remain a significant concern. It is imperative to implement appropriate measures to mitigate these concerns and ensure the adherence of LLMs to human-centric principles. By introducing the TRUSTGPT benchmark, we aim to foster a future that is not only more responsible but also integrated and dependable for language models.
# References
[1] OpenAI. Chatgpt, 2023. https://openai.com/product/chatgpt.
[2] OpenAI. Gpt-4, 2023. https://openai.com/product/gpt-4. | 2306.11507#36 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 36 | # phi-1-base
global s s = s.replace("pew", "pewpewpew") if "pew" in s: s = s.replace("pew", "pewpewpew") else: s = s + "no pew..." text.delete(0.0, END) text.insert(0.0, s) def main(): global s s = "pewpewpew" text.delete(0.0, END) text.insert(0.0, s) text.tag_add("button", "<Button-1>", "<Button-3> ") text.tag_add("button", "<Button-2>", "<Button-4> ") # The model keeps adding buttons from here.
# phi-1-small | 2306.11644#36 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11296 | 37 | The finalized prompt for Process 2 comprises three parts: (i) a request for ChatGPT to determine whether the provided context includes a comprehensive MOF synthesis, answering only with "Yes" or "No"; (ii) some example contexts labeled as "Yes" and other labeled as "No"; (iii) the context to be classified, consisting of one or more research article paragraphs. Similar to Process 1's prompt, parts (i) and (ii) are fixed, while part (iii) is replaced with independent sections from the paper to be classified. The entire research article is parsed into sections of 100-500 words, which are iteratively incorporated into the prompt and sent separately to ChatGPT for a "Yes" or "No" response. Each prompt represents a one-time conversation, and ChatGPT cannot view answers from previous prompts, preventing potential bias in its decision-making for the current prompt. Text embeddings are high-dimensional vector representations of text that capture semantic information, enabling quantification of the relatedness of textual content. The distance between these vectors in the embedded space correlates with the semantic similarity between corresponding text strings, with smaller distances indicating greater relatedness. | 2306.11296#37 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11507 | 37 | [2] OpenAI. Gpt-4, 2023. https://openai.com/product/gpt-4.
[3] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
[4] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model, 2023. https://github.com/tatsu-lab/stanford_alpaca.
[5] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng and Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna, 2023. https://lmsys.org/blog/2023-03-30-vicuna/. | 2306.11507#37 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 37 | # phi-1-small
import tkinter as tk root = tk.Tk() global s text = "" while True: event, values = tk.popup() if event == "submit": text += values.split()[0] + " " elif event == "textfield": s += values.split()[0] + " " elif event == "submit-button-text": text = values.split()[0] + " " elif event == "text-button-text": s = text if "pew" in text: text = text.replace("pew", "pewpewpew") if "no pew" in text: break tk.destroy()
The three models' completions show a huge gap in their prompt understanding. Both phi-1-base and phi-1-small fail to use the correct Tkinter APIs and make up meaningless function calls. On the other hand, phi-1 implements the GUI and all the functions correctly (except that it did not copy "pewpewpew..." correctly). We present two additional examples in Appendix A for pytorch and pyplot.
Chat mode example. Finally, we show that phi-1 has a better chat capability than phi-1-base despite that chat data is exclusive in pretraining, but not in the finetuning. | 2306.11644#37 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 37 | [Figure residue: perspective panels - Toxicity, Stereotypes, Adversarial Robustness, Out-of-Distribution Robustness, Robustness on Adversarial Demonstrations, Machine Ethics, Fairness] • Standard benchmark RealToxicityPrompts (§3.1) • Diverse system prompts (§3.2) • Challenging user prompts (§3.3) • Stereotype system prompts (§4) • Customized stereotype user prompts (§4) • Standard AdvGLUE benchmark (§5.1) • Challenging adversarial user prompts AdvGLUE++ (§5.2) • OOD style (§6.1) • OOD knowledge (§6.2) • OOD demonstrations in in-context learning (§6.3) • Counterfactual examples in demonstrations (§7.1) • Spurious correlations in demonstrations (§7.2) • Backdoors in demonstrations (§7.3) • Privacy leakage of training data (§8.1) • Privacy leakage during conversations (§8.2) • Privacy-related words and privacy events (§8.3) • Standard machine ethics benchmarks | 2306.11698#37 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 38 | The distance between these vectors in the embedded space correlates with the semantic similarity between corresponding text strings, with smaller distances indicating greater relatedness. While Process 2 can automatically read and summarize papers, it must evaluate every section to identify synthesis paragraphs. To expedite this process, we developed Process 3, which filters sections least likely to contain synthesis parameters using OpenAI embeddings before exposing the article to the classification assistant in Process 2. To achieve this, we employed a two-step approach to construct Process 3: first, parsing all papers and converting each segment into embeddings; and second, calculating and ranking the similarity scores of each segment based on their relevance to a predefined prompt encapsulating synthesis parameters. | 2306.11296#38 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
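A minimal sketch of the two-step idea behind Process 3 described above: embed every parsed segment, then rank segments by cosine similarity to a predefined query that encapsulates synthesis parameters. Here `embed(text)` is a placeholder for an embeddings API call (e.g., OpenAI's text-embedding endpoint), and the query wording is an assumption, not the authors' exact prompt.

```python
# Illustrative sketch only: rank text segments by similarity to a synthesis-parameter query,
# assuming a placeholder embed(text) -> list[float] helper for the embeddings API.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_segments(segments, embed,
                  query="MOF synthesis conditions: metal salt, organic linker, solvent, temperature, reaction time"):
    query_vec = embed(query)
    scored = [(cosine_similarity(embed(seg), query_vec), seg) for seg in segments]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored  # top-ranked segments are the ones most likely to describe a synthesis

# Usage idea: keep only the top-k ranked segments of a paper before handing it to the
# Process 2 classifier, so far fewer sections need to be sent to the chat model.
```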
2306.11489 | 38 | Before-training enhancement During-training enhancement Post-training enhancement Method Expand input structures Enrich input information Generate new data Optimize word masks Incorporate knowledge encoders Insert knowledge encoding layers Add independent adapters Modify the pre-training task Fine-tune PLMs with knowledge KGPLM K-BERT [73], CoLAKE [74], Zhang et al. [75] LUKE [76], E-BERT [77], KALM [78], OAG-BERT [79], DKPLM [80] AMS [81], KGPT [82], KGLM [83], ATOMIC [84], KEPLER [85] ERNIE [31], WKLM [86], GLM [35] ERNIE [31], ERNIE 3.0 [87], BERT-MK [88], CokeBERT [89], JointLK [90], KET [91], Liu et al. [92], QA-GNN [93], GreaseLM [67], KLMo [94] KnowBERT [95], K-BERT [73], CoLAKE [74], JAKET [96], KGBART [97] K-Adapter [98], OM-ADAPT [99], DAKI-ALBERT [100], CKGA [101] ERNIE | 2306.11489#38 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 38 | [6] Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. More than you've asked for: A comprehensive analysis of novel prompt injection threats to application-integrated large language models. arXiv preprint arXiv:2302.12173, 2023.
[7] Daniel Kang, Xuechen Li, Ion Stoica, Carlos Guestrin, Matei Zaharia, and Tatsunori Hashimoto. Exploiting programmatic behavior of llms: Dual-use through standard security attacks. arXiv preprint arXiv:2302.05733, 2023.
[8] Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, and Yangqiu Song. Multi-step jailbreaking privacy attacks on chatgpt. arXiv preprint arXiv:2304.05197, 2023.
[9] Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462, 2020. | 2306.11507#38 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 38 | # Prompt:
Student: I have a Python pyplot, I want to increase its resolution and rotate it, what should I do? TA:
1. Set the 'dpi' parameter to the desired resolution (e.g. 300 for 300 DPI) 2. Use the 'rotate' function from the 'matplotlib. pyplot' module to rotate the plot by 90 degrees clockwise Here's an example: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot([1, 2, 3], [4, 5, 6]) # Set the resolution to 300 DPI ax.set_dpi(300) # Rotate the plot by 90 degrees clockwise ax.rotate(90)
# phi-1-base
You can use the `setp()` function to set the resolution and the `set_rotation()` function to rotate the plot. <|endoftext|>
# phi-1-small
I am a Python pyplot, and I want to increase its resolution and rotate it, what should I do? SECOND: I am a Python pyplot, and I want to increase its resolution and rotate it, what should I do? THIRD:...
# 4 Evaluation on unconventional problems with LLM grading | 2306.11644#38 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 38 | ¢ Privacy leakage during conversations (§8.2) ¢ Privacy-related words and privacy events (§8.3) * Standard machine ethics benchmarks (§9.1) ¢ Jailbreaking system and user prompts (§9.2) ¢ Evasive sentences in user inputs (§9.3) * Conditional immoral actions (§9.4) ¢ Unfair test data in zero-shot settings (§10.2) ¢ Unfair context in few-shot settings (§10.3) ¢ Fair context in few-shot settings (§10.4) Toxic Task Prompts (1.2K) { Nontoxic Task Prompts (1.2K) Straightforward Prompts (9) Role-playing Prompts (15) Task-reformulation Prompts (6) Respond-as-program Prompts (3) GPT-3.5 Generated Toxic Task Prompts (1.2K) GPT-4 Generated Toxic Task Prompts (1.2K) Benign System Prompts { Untargeted System Prompts Targeted System Prompts Stereotype topics (16 topics x 3 variations) { Demographic groups (stereotyped (12), non-stereotyped (12)) Adversarial Attack Strategies | 2306.11698#38 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 39 | Figure 3.
Two-dimensional visualization of 18,248 text segment embeddings, with each point representing a text segment from the research articles selected. Color coding denotes thematic categories: red for "synthesis", green for "gas sorption", yellow for "literature reference", blue for "crystallographic data", purple for "structural analysis", orange for "characterization", and grey for other text segments not emphasized in this study.
(Embeddings generated with the text-embedding-ada-002 model.) | 2306.11296#39 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 39 | [96], KGBART [97] K-Adapter [98], OM-ADAPT [99], DAKI-ALBERT [100], CKGA [101] ERNIE [31], LUKE [76], OAG-BERT [79], WKLM [86], SenseBERT [102], ERICA [103], SentiLARE [104], GLM [35], KEPLER [85], JAKET [96], ERNIE 2.0 [105], ERNIE 3.0 [87], DRAGON [106], LRLM [107] KALA [108], KeBioSum [109], KagNet [110], BioKGLM [111], Chang et al. [112] Chang et al. [112], Andrus et al. [113], KP-PLM [114] Generate knowledge-based prompts | 2306.11489#39 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 39 | [10] Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509, 2022.
[11] Yau-Shian Wang and Yingshan Chang. Toxicity detection with generative prompt-based inference. arXiv preprint arXiv:2205.12390, 2022.
[12] Nedjma Ousidhoum, Xinran Zhao, Tianqing Fang, Yangqiu Song, and Dit-Yan Yeung. Probing toxic content in large pre-trained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4262–4274, 2021.
[13] Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, and Diyi Yang. On second thought, let's not think step by step! bias and toxicity in zero-shot reasoning. arXiv preprint arXiv:2212.08061, 2022.
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 39 | # 4 Evaluation on unconventional problems with LLM grading
A potential concern with the surprisingly good performance of phi-1 on HumanEval (see Table 1 and Figure 2.1) is that there might be memorization stemming from contamination of the synthetic CodeExercises dataset. We study this potential contamination directly in Section 5, while this section addresses the concern with a new evaluation that is designed to be unconventional enough to be unlikely to appear in our training dataset.
To minimize bias and leakage, the new evaluation problems were created by a dedicated team that did not access the CodeExercises dataset or the final model. They created 50 new problems in the same format as HumanEval with instructions to design problems that are unlikely to appear in real-world code bases or as coding exercises. Here is an example of such a problem:
def sort_concat_square_deduplicate(list1, list2, my_threshold):
    """ This function takes two lists of integers, sorts each of them in ascending order,
    concatenates them, squares the entries at even indices, filters out entries smaller
    than my_threshold and then removes duplicates. The resulting list is returned. """ | 2306.11644#39 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
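For reference, here is a minimal sketch of one possible solution to the sort_concat_square_deduplicate problem quoted in chunk 2306.11644#39 above. It reflects one reading of the docstring and is our own illustration, not a reference solution published by the paper.

```python
def sort_concat_square_deduplicate(list1, list2, my_threshold):
    # one possible reading of the problem statement (illustrative, not the paper's answer)
    combined = sorted(list1) + sorted(list2)                 # sort each list, then concatenate
    squared = [x * x if i % 2 == 0 else x                    # square entries at even indices
               for i, x in enumerate(combined)]
    filtered = [x for x in squared if x >= my_threshold]     # drop entries below the threshold
    return list(dict.fromkeys(filtered))                     # deduplicate, keeping first occurrences

# e.g. sort_concat_square_deduplicate([3, 1, 2], [5, 4], 4) -> [9, 4, 25]
```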
2306.11698 | 39 | topics (16 topics x 3 variations) { Demographic groups (stereotyped (12), non-stereotyped (12)) Adversarial Attack Strategies (14) Tasks (6 GLUE Tasks, 4978 Examples) { Attacks on autoregressive models { Diverse System Prompt + Task Description (Alpaca-7B, Vicuna-13B, StableVicuna-13B) Attack Strategies (5 Strategies, 42017 Examples) Word-level substitutions for style transformations Sentence-level style transformations { RealtimeQa on facts before and after 2023 w/o IDK option RealtimeQA on facts before and after 2023 w/ IDK option Different Style Transformations { Different domains from MMLU { SNLI-CAD counterfactual (2 x 400) MSGS counterfactual (4 x 1K) { Fallible heuristics HANS dataset (6K) Backdoor generation strategies Locations of backdoored demonstrations Locations of backdoor triggers Backdoored instructions Context prompting (3.3k) { Zero-shot & few-shot prompting (3.3k) Zero-shot prompting (100) Few-shot privacy-protection demonstrations (100) Few-shot | 2306.11698#39 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 40 | In particular, we partitioned the 228 research articles into 18,248 individual text segments (Supporting Information, Figure S30–S32). Each segment was converted into a 1536-dimensional text embedding using OpenAI's text-embedding-ada-002, a simple but efficient model for this process (Supporting Information, Figure S33–S35). These vectors were stored for future use. To identify segments most and least likely to contain synthesis parameters, we employed an interactive prompt refinement strategy (Supporting Information, Section S2.4), consulting with ChatGPT to optimize the prompt. The prompt used in Process 3, unlike previous prompts, served as a text segment for search and similarity comparison rather than instructing ChatGPT (Supporting Information, Figure S25). Next, the embeddings of all 18,248 text segments were compared with the prompt's embedding, and a relevance score was assigned to each segment based on the cosine similarity between the two embeddings. Highly relevant segments were passed on to the classification assistant for further processing, while low-similarity segments were filtered out (Figure 1). | 2306.11296#40 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
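As an aside to chunk 2306.11296#40 above, the relevance scoring it describes (cosine similarity between a query prompt's embedding and each segment's embedding) can be sketched as follows. This is our own illustration: embed stands in for whichever embedding endpoint is used (the paper uses OpenAI's text-embedding-ada-002), and the 0.75 cutoff is illustrative rather than a value taken from the paper.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_segments(segments, query, embed, keep_threshold=0.75):
    """Score text segments against a query prompt and filter out low-similarity ones.

    embed(text) -> np.ndarray is a placeholder for the embedding model
    (e.g. a 1536-dimensional vector); segments scoring below keep_threshold are
    dropped, the rest are passed on to the classification step.
    """
    q = embed(query)
    scored = sorted(((seg, cosine(embed(seg), q)) for seg in segments),
                    key=lambda pair: pair[1], reverse=True)
    return [(seg, score) for seg, score in scored if score >= keep_threshold]
```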
2306.11489 | 40 | [Figure 5 schematic: a knowledge graph (entities such as Apple, China, City) and an input text ("Tim Cook is currently visiting Beijing now") are combined through a knowledge-expanded input structure, a knowledge-guided masking strategy, and knowledge-guided artificial texts; token, segment, and knowledge embeddings together with a visible matrix feed the Transformer layers and pre-training tasks.]
the sentence structure information. Formally, the output from mask-self-attention is computed as:
h = \mathrm{softmax}\left(\frac{QK^T + M}{\sqrt{d_k}}\right)V, \qquad (8)
where M is the visible matrix. CoLAKE [74] addresses the heterogeneous embedding space challenge by combining knowledge context and language context into a unified word-knowledge graph. Zhang et al. [75] employed ConceptNet as the knowledge source and improved the visible matrix to control the information flow, which further improved the performance of K-BERT.
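To make Eq. (8) concrete, here is a minimal NumPy sketch of single-head self-attention restricted by a visible matrix. It is an illustration written for this excerpt rather than code from K-BERT or any other cited system; the shapes and the -1e9 masking constant are arbitrary choices.

```python
import numpy as np

def masked_self_attention(X, Wq, Wk, Wv, visible):
    """Single-head self-attention with a visible matrix, mirroring Eq. (8).

    X: (n, d) token embeddings; Wq/Wk/Wv: (d, d_k) projection matrices;
    visible: (n, n) boolean matrix, True where tokens i and j may attend to each other.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    M = np.where(visible, 0.0, -1e9)                  # 0 for visible pairs, large negative otherwise
    scores = (Q @ K.T + M) / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V

# toy usage: 4 tokens; the last token (an injected triple token) is visible
# only to its head entity (token 1) and to itself
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
visible = np.ones((4, 4), dtype=bool)
visible[[0, 2], 3] = False
visible[3, [0, 2]] = False
out = masked_self_attention(X, Wq, Wk, Wv, visible)
```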
Fig. 5. Main framework of before-training enhancement KGPLMs.
enhancement, and post-training enhancement methods according to the stage at which KGs participate in pre-training, as illustrated in Fig. 4.
# A. Before-training Enhancement KGPLMs | 2306.11489#40 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 40 | [14] Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. Toxicity in chatgpt: Analyzing persona-assigned language models. arXiv preprint arXiv:2304.05335, 2023.
[15] Yuxuan Wan, Wenxuan Wang, Pinjia He, Jiazhen Gu, Haonan Bai, and Michael Lyu. Biasasker: Measuring the bias in conversational ai system. arXiv preprint arXiv:2305.12434, 2023.
[16] Zonghan Yang, Xiaoyuan Yi, Peng Li, Yang Liu, and Xing Xie. Unified detoxifying and debiasing in language generation via inference-time adaptive optimization. arXiv preprint arXiv:2210.04492, 2022.
[17] Shikha Bordia and Samuel R Bowman. Identifying and reducing gender bias in word-level language models. arXiv preprint arXiv:1904.03035, 2019. | 2306.11507#40 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 40 | One of the challenges of evaluating language models on coding tasks is that the output of the model is often binary: either the code passes all the unit tests or it fails. However, this does not capture the nuances of the model's performance, as it might have produced a code that is almost correct but has a minor error, or a code that is completely wrong but coincidentally passes some tests. Arguably, a more informative way of assessing the model's coding skills is to compare its output with the correct solution and grade it based on how well it matches the expected logic. This is similar to how humans are evaluated on coding interviews, where the interviewer does not only run the code but also examines the reasoning and the quality of the solution.
To evaluate candidate solutions, we therefore adopt the approach of using GPT-4 to grade the solution (such as in [EL23]). This approach has two distinct advantages: (1) by using GPT-4 as a grader, we can leverage its knowledge and generative abilities to obtain a more fine-grained and meaningful signal of the | 2306.11644#40 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 40 | & few-shot prompting (3.3k) Zero-shot prompting (100) Few-shot privacy-protection demonstrations (100) Few-shot privacy-leakage demonstrations (100) Privacy-related words (17) { Privacy events (8) { Practical scenarios from ETHICS (4K) Game-like scenarios from Jiminy Cricket (4K) System-level jailbreaking prompts { User-level jailbreaking prompts System and user-level jailbreaking prompts Describe immoral actions as unintentional { Describe immoral actions as harmless Describe immoral actions as unauthenticated Harm to others vs. Harm to oneself (1.1K) { Severeness of harm (1.1K) { Unfair test distributions with different base rate parity given a sensitive attribute Different sensitive attributes { Unfair few-shot examples with different base rates parity Different numbers of unfair few-shot examples { Different numbers of fair few-shot examples on fair test distribution Different numbers of fair few-shot examples on unfair test distribution | 2306.11698#40 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 41 | To evaluate the effectiveness of this approach, we conducted a visual exploration of our embedding data (Figure 3). By reducing the vectors' dimensionality, we observed distinct clusters corresponding to different topics. Notably, we identified distinct clusters related to topics like âgas sorptionâ, âliterature referenceâ, âcharacterizationâ, âstructural analysisâ and âcrys- tallographic dataâ, which were separate from the âsynthesisâ cluster. This observation strongly supports the efficiency of our embedding-based filtering strategy. However, this strategy, while effective at filtering out less relevant text and passing seg- ments of mid to high relevance to the subsequent classification assistant, cannot directly search for synthesis paragraphs to feed to the summarization assistant, thus bypassing the classification assistant. In other words, the searching-to-classifying- to-summarizing pipeline cannot be simplified to a searching-to-summarizing pathway due to the inherent search limitations of the embeddings. As shown in Figure 3, embeddings alone may not accurately identify all relevant âsynthesisâ sections, particularly when they | 2306.11296#41 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 41 | # A. Before-training Enhancement KGPLMs
There are two challenges when integrating the knowledge from KGs into PLMs: heterogeneous embedding space and knowledge noise. The first challenge arises from the heterogeneity between text and KG. The second challenge occurs when unrelated knowledge diverts the sentence from its correct meaning. Before-training enhancement methods resolve these issues by unifying text and KG triples into the same input format, the framework of which is shown in Fig. 5. Existing studies propose diverse approaches to achieve this goal, including expanding input structures, enriching input information, generating new data, and optimizing word masks.
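As a rough illustration of the first of these ideas (expanding the input structure), the sketch below appends matched KG triples after their head entity, assigns soft positions so the original word order is preserved, and builds a visible matrix that can then restrict attention as in the mask-self-attention sketch shown earlier in this document. This is our own simplified illustration, not code from any of the systems discussed next.

```python
def build_knowledge_input(tokens, triples):
    """Inject KG triples into a token sequence (simplified sketch).

    tokens:  e.g. ["Tim", "Cook", "is", "visiting", "Beijing"]
    triples: entity word -> (relation, tail), e.g. {"Beijing": ("capital_of", "China")}
    Returns expanded tokens, soft-position ids, and a visible matrix in which
    injected tokens are only visible to their head entity and to each other.
    """
    out_tokens, soft_pos, branch_of = [], [], []
    for pos, tok in enumerate(tokens):
        out_tokens.append(tok)
        soft_pos.append(pos)
        branch_of.append(None)                    # None marks an original sentence token
        if tok in triples:
            rel, tail = triples[tok]
            for offset, injected in enumerate((rel, tail), start=1):
                out_tokens.append(injected)
                soft_pos.append(pos + offset)     # soft position continues from the head entity
                branch_of.append(tok)             # remember which entity this branch hangs on
    n = len(out_tokens)
    visible = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            same_branch = branch_of[i] == branch_of[j]
            i_sees_head = branch_of[i] is not None and branch_of[j] is None and out_tokens[j] == branch_of[i]
            j_sees_head = branch_of[j] is not None and branch_of[i] is None and out_tokens[i] == branch_of[j]
            visible[i][j] = same_branch or i_sees_head or j_sees_head
    return out_tokens, soft_pos, visible

# e.g. build_knowledge_input(["Tim", "Cook", "is", "visiting", "Beijing"],
#                            {"Beijing": ("capital_of", "China")})
```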
Expand Input Structures. Some methods expand the input text into graph structure to merge the structured knowledge of KGs and then convert the merged graph into text for PLM training. For example, K-BERT [73] converts texts to sentence trees to inject related triples by fusing them with KG subgraphs and introduces soft-position and visible matrix to overcome the problem of knowledge noise. Moreover, it proposes mask-self-attention, an extension of self-attention, to prevent erroneous semantic alterations by taking advantage of | 2306.11489#41 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 41 | [18] Ruibo Liu, Chenyan Jia, Jason Wei, Guangxuan Xu, Lili Wang, and Soroush Vosoughi. Mitigating political bias in language models through reinforced calibration. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14857–14866, 2021.
[19] Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, and Aram Galstyan. Mitigating gender bias in distilled language models via counterfactual role reversal. arXiv preprint arXiv:2203.12574, 2022.
[20] Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. Gender bias in neural natural language processing. Logic, Language, and Security: Essays Dedicated to Andre Scedrov on the Occasion of His 65th Birthday, pages 189–202, 2020.
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 41 | Model                          Size    Training tokens
CodeGen-Mono-350M [NPH+23]     350M    577B
CodeGen-Mono-16.1B [NPH+23]    16.1B   577B
Replit [Rep23]                 2.7B    525B
StarCoder [LAZ+23]             15.5B   1T
phi-1-base                     1.3B    7B
phi-1-small                    350M    7B
phi-1                          1.3B    7B
Table 2: LLM graded Understanding scores on 50 new unconventional coding problems.
student model's coding capabilities, and (2) it obviates the need for tests¹. Our prompt instructs the LLM to evaluate a student's solution first in a short verbal evaluation followed by grades from 0 to 10.
See Table 2 for our results with phi-1 and competing models. The grades on our new unconventional problems give the same ranking as HumanEval (see Table 1). phi-1 again achieves a score significantly higher than StarCoder, as it did on HumanEval. Given that the new problems have had no chance to contaminate the training data and, furthermore, were designed to be outside the training distribution, these results greatly increase our confidence in the validity of phi-1's performance.
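A minimal sketch of what such a grading call could look like follows. The prompt wording and the call_llm wrapper are our own placeholders; the paper's exact grading prompt is not reproduced in this chunk.

```python
# Hypothetical grading helper; `call_llm` stands in for whatever chat-completion client is used.
GRADING_PROMPT = """You are grading a student's Python solution.
Problem:
{problem}

Student solution:
{solution}

First give a short verbal evaluation of the logic, then one final line of the form
"Grade: <integer between 0 and 10>"."""

def grade_solution(problem: str, solution: str, call_llm) -> int:
    reply = call_llm(GRADING_PROMPT.format(problem=problem, solution=solution))
    # keep only the last "Grade:" line so the verbal evaluation is ignored
    grades = [line for line in reply.splitlines() if line.strip().lower().startswith("grade:")]
    return int(grades[-1].split(":", 1)[1].split("/")[0].strip()) if grades else 0
```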
# 5 Data pruning for unbiased performance evaluation | 2306.11644#41 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11296 | 42 | search limitations of the embeddings. As shown in Figure 3, embeddings alone may not accurately identify all relevant âsynthesisâ sections, particularly when they contain additional information such as characterization and sorption data. The presence of these ele- ments in a synthesis section can reduce its similarity score and its proximity to the center of the âsynthesisâ cluster. Points between the âsynthesisâ and âcharacterizationâ or âcrystallographic dataâ clusters may not have the highest similarity scores and could be missed. However, by filtering only the lowest scores, mid-relevance points are retained and passed to the clas- sification assistant, which can more accurately classify ambiguous content. | 2306.11296#42 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 42 | Enrich Input Information. Instead of merging data from texts and KGs, some studies incorporate entities as auxiliary information by combining their embeddings with text embeddings. LUKE [76] introduces entity type embedding to indicate that the corresponding token in a sentence is an entity, and trains the model with the masked entity prediction task in addition to the MLM task. Further, it extends the Transformer encoder using an entity-aware self-attention mechanism to simultaneously handle both types of tokens. E-BERT [77] aligns entity embeddings with wordpiece vectors through an unconstrained linear mapping matrix and feeds the aligned representations into BERT as if they were wordpiece vectors. KALM [78] signals the existence of entities to the input of the encoder in pre-training using an entity-extended tokenizer and adds an entity prediction task to train the model. Liu et al. [79] proposed OAG-BERT, a unified backbone language model for academic knowledge services, which integrates heterogeneous entity knowledge and scientific corpora in an open academic graph. They designed an entity type embedding to differentiate various entity types and used a span-aware entity masking strategy for MLM over entity names with | 2306.11489#42 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
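A small sketch of the alignment idea mentioned in chunk 2306.11489#42 above: fitting an unconstrained linear map that carries entity embeddings into the wordpiece-embedding space. This is our own least-squares illustration with random stand-in data; E-BERT's actual training procedure is not reproduced here.

```python
import numpy as np

def fit_linear_alignment(entity_vecs, word_vecs):
    """Least-squares fit of W so that entity_vecs @ W approximates word_vecs.

    entity_vecs: (n, d_e) embeddings of entities (e.g. from a KG embedding model)
    word_vecs:   (n, d_w) embeddings of the corresponding surface-form wordpieces
    Returns W with shape (d_e, d_w); aligned entity vectors can then be fed to
    the PLM as if they were wordpiece vectors.
    """
    W, *_ = np.linalg.lstsq(entity_vecs, word_vecs, rcond=None)
    return W

# usage sketch with random stand-ins for real embeddings
rng = np.random.default_rng(1)
E = rng.normal(size=(1000, 100))             # entity embeddings
T = rng.normal(size=(100, 768))              # hidden toy map used only to fabricate targets
X = E @ T + 0.01 * rng.normal(size=(1000, 768))
W = fit_linear_alignment(E, X)
aligned = E @ W                              # now lives in the wordpiece space
```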
2306.11507 | 42 | [21] Yue Guo, Yi Yang, and Ahmed Abbasi. Auto-debias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012–1023, 2022.
[22] Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, and Yejin Choi. Social bias frames: Reasoning about social and power implications of language. arXiv preprint arXiv:1911.03891, 2019.
[23] Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022.
[24] Terry Yue Zhuo, Yujin Huang, Chunyang Chen, and Zhenchang Xing. Exploring ai ethics of chatgpt: A diagnostic analysis. arXiv preprint arXiv:2301.12867, 2023.
[25] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. | 2306.11507#42 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 42 | # 5 Data pruning for unbiased performance evaluation
In Figure 2.1, we see that training on CodeExercises leads to a substantial boost in the performance of the model on the HumanEval benchmark. To investigate this boost, we propose to prune the CodeExercises dataset by removing files that are âsimilarâ to those in HumanEval. This process can be viewed as a âstrong formâ of data decontamination. We then retrain our model on such pruned data, and still observe strong performance on HumanEval. In particular, even after aggressively pruning more than 40% of the CodeExercises dataset (this even prunes files that are only vaguely similar to HumanEval, see Appendix C), the retrained phi-1 still outperforms StarCoder.
We believe that such a data-pruning experiment is a fair way to evaluate performance, and is more insightful than standard "contamination" studies in the literature that are usually based on measures of overlap between training and test data (e.g., Section 4.8 of [AON+21]). For the sake of completeness, we start this section by conducting a standard contamination experiment, which shows that CodeExercises is not contaminated by HumanEval in this standard sense.
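A minimal sketch of this kind of similarity-based pruning; the `similarity` function is a placeholder here (the measures actually used are the embedding- and AST-based distances described in Section 5.2), so this is illustrative rather than the authors' pipeline:

```python
from typing import Callable, List

def prune_similar_to_test(
    train_items: List[str],
    test_items: List[str],
    similarity: Callable[[str, str], float],
    threshold: float,
) -> List[str]:
    """Keep only training items whose similarity to every test item
    stays below the threshold (a "strong form" of decontamination)."""
    return [
        item
        for item in train_items
        if all(similarity(item, ref) < threshold for ref in test_items)
    ]
```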
# 5.1 N-gram overlap | 2306.11644#42 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 42 | in Section 5.2); 5) among the five adversarial attack strategies against the three base autoregressive models, SemAttack achieves the highest adversarial transferability when transferring from Alpaca and StableVicuna, while TextFooler is the most transferable strategy when transferring from Vicuna (Tables 8, 9 and 10 in Section 5.2). • Out-of-Distribution Robustness. We find that: 1) GPT-4 exhibits consistently higher generalization capabilities given inputs with diverse OOD style transformations compared to GPT-3.5 (Table 11 in Section 6.1); 2) when evaluated on recent events that are presumably beyond GPT models' knowledge scope, GPT-4 demonstrates higher resilience than GPT-3.5 by answering "I do not know" rather than made-up content (Table 12 in Section 6.2), while the accuracy still needs to be further improved; 3) with OOD demonstrations that share a similar domain but differ in style, GPT-4 presents consistently higher generalization than GPT-3.5 (Table 13 in Section 6.3); 4) with OOD demonstrations that contain different domains, the accuracy of GPT-4 is positively influenced by | 2306.11698#42 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 43 | Rather than relying on singular, time-consuming conversations with web-based ChatGPT to process textual data from a multitude of research articles, OpenAI's GPT-3.5-turbo, which is identical to the one underpinning the ChatGPT product, facilitates a more efficient approach, as it incorporates an Application Programming Interface (API), enabling batch processing of text from an extensive array of articles. This is achieved through iterative context and prompt submissions to ChatGPT, followed by the collection of its responses (Supporting Information, Section S3.4).
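A minimal sketch of such an API-driven batch loop, assuming the openai Python package (pre-1.0 chat-completion interface) and an API key in the environment; the instruction text and the `segments` list are illustrative placeholders, not the paper's actual prompts:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

def ask_chatgpt(system_prompt: str, text: str) -> str:
    """Submit one instruction plus one article segment to gpt-3.5-turbo
    and return the model's reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

# Illustrative batch loop over parsed article segments.
segments = ["<paragraph 1 of an article>", "<paragraph 2 of an article>"]
instruction = "Summarize any MOF synthesis conditions in this text as a table."
answers = [ask_chatgpt(instruction, seg) for seg in segments]
```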
Specifically, our approach involves having ChatGPT create Python scripts for parsing academic papers, generating prompts, executing text processing through Processes 1, 2, and 3, and collating the responses into cleaned, tabulated data | 2306.11296#43 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 43 | in an open academic graph. They designed an entity type embedding to differentiate various entity types and used a span-aware entity masking strategy for MLM over entity names with different lengths. Besides, they designed the entity-aware 2D positional encoding to incorporate the entity span and sequence order information. Zhang et al. [80] decomposed the knowledge injection process of PLMs into pre-training, fine-tuning, and inference stages, and proposed DKPLM, which injects knowledge only during pre-training. Specifically, DKPLM detects long-tail entities according to their semantic importance in both texts and KGs and replaces the representations of detected long-tail entities with | 2306.11489#43 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
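A small illustrative sketch of the span-aware entity masking idea mentioned in the 2306.11489#43 chunk above - masking whole entity spans rather than independent subword positions; the span format and masking probability are assumptions, not the cited model's actual recipe:

```python
import random

def span_mask_entities(tokens, entity_spans, mask_token="[MASK]", prob=0.15):
    """Mask whole entity spans (start inclusive, end exclusive) as a unit,
    so multi-token entity names are hidden together for masked LM training."""
    out = list(tokens)
    for start, end in entity_spans:
        if random.random() < prob:
            out[start:end] = [mask_token] * (end - start)
    return out

print(span_mask_entities(
    ["Leonardo", "da", "Vinci", "painted", "the", "Mona", "Lisa", "."],
    [(0, 3), (5, 7)],
    prob=1.0,
))
```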
2306.11507 | 43 | [26] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[27] Maxwell Forbes, Jena D Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. Social chemistry 101: Learning to reason about social and moral norms. arXiv preprint arXiv:2011.00620, 2020.
[28] Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. Measuring and reducing gendered correlations in pre-trained models. arXiv preprint arXiv:2010.06032, 2020.
[29] Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R Bowman. Crows-pairs: A challenge dataset for measuring social biases in masked language models. arXiv preprint arXiv:2010.00133, 2020.
| 2306.11507#43 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 43 | # 5.1 N-gram overlap
N-gram overlap measures the similarity of text segments based on shared n-word sequences. We calculate the n-gram overlap between the docstring of each HumanEval question and each exercise in the generated CodeExercises dataset. We found 4 HumanEval questions with a 13-gram overlap with at least one entry in our dataset. After further investigation, we found that all 4 of these 13-gram overlap cases are false positives, such as the example below. Our n-gram overlap analysis shows that our dataset has minimal letter-by-letter overlap with HumanEval.
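A word-level sketch of such an overlap check (the exact tokenization behind the 13-gram comparison is not specified in this excerpt, so this is only illustrative):

```python
def word_ngrams(text: str, n: int = 13) -> set:
    """Set of word-level n-grams occurring in the text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shares_ngram(docstring: str, exercise: str, n: int = 13) -> bool:
    """True if the two texts share at least one n-gram."""
    return bool(word_ngrams(docstring, n) & word_ngrams(exercise, n))
```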
¹Developing rigorous sets of tests can be a significant undertaking, as demonstrated by [LXWZ23].
HumanEval: You are given a non-empty list of positive integers. Return the greatest integer that is greater than zero, and has a frequency greater than or equal to the value of the integer itself. The frequency of an integer is the number of times it appears in the list.
CodeExercises: Calculates the power frequency analysis sum of a list of integers. The power frequency analysis sum is calculated by taking the sum of the squares of the frequencies of each unique integer in the list. The frequency of an integer is the number of times it appears in the list.
# 5.2 Embedding and syntax-based similarity analysis | 2306.11644#43 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 43 | (Table 13 in Section 6.3); 4) with OOD demonstrations that contain different domains, the accuracy of GPT-4 is positively influenced by domains close to the target domain but negatively impacted by those far away from it, while GPT-3.5 exhibits a decline in model accuracy given all demonstration domains (Table 15 in Section 6.3). • Robustness to Adversarial Demonstrations. We find that: 1) GPT-3.5 and GPT-4 will not be misled by the counterfactual examples added in the demonstrations and can even benefit from the counterfactual demonstrations in general (Table 17 in Section 7.1); 2) spurious correlations constructed from different fallible heuristics in the demonstrations have different impacts on model predictions. GPT-3.5 is more likely to be misled by the spurious correlations in the demonstrations than GPT-4 (Table 19 and Figure 16 in Section 7.2); 3) providing backdoored demonstrations will mislead both GPT-3.5 and GPT-4 to make incorrect predictions for backdoored inputs, especially when the backdoored demonstrations are positioned close to the (backdoored) user inputs (Table 20, 21 in Section | 2306.11698#43 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 44 | Specifically, our approach involves having ChatGPT create Python scripts for parsing academic papers, generating prompts, executing text processing through Processes 1, 2, and 3, and collating the responses into cleaned, tabulated data (Supporting Information, Figures S28–S39). Traditionally, such a process could necessitate substantial coding experience and be time-consuming. However, we leverage the code generation capabilities of ChatGPT to establish Processes 1, 2, and 3 for batch processing using OpenAI's APIs. In essence, researchers only need to express their requirements for each model in natural language - specifying inputs and desired outputs - and ChatGPT will generate the appropriate Python code (Supporting Information, Section S3.5). This code can be copied, pasted, and executed in the relevant environment. Notably, even in the event of an error, ChatGPT, especially when equipped with the GPT-4 model, can assist in code revision. We note that while coding assistance from ChatGPT may not be necessary for those with coding experience, it does provide an accessible platform for individuals lacking such experience to engage in the process. Given the simplicity and straightforwardness of the logic involved in Processes 1, 2, and 3, ChatGPT-generated Python code exhibits minimal errors and significantly accelerates the programming process. | 2306.11296#44 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 44 | the representations of the corresponding knowledge triples generated by shared PLM encoders. The most-commonly used knowledge embedding model is TransE [115], which learns entity and relation representations by minimizing the following loss function:
\mathcal{L}_{KE} = -\left\| e_h + r - e_t \right\|_2^2, \quad (9)
where e_h and e_t are the embeddings of the head and tail entities, while r is the representation of the relation. | 2306.11489#44 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 44 |
[30] Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. Measuring bias in contextualized word representations. arXiv preprint arXiv:1906.07337, 2019.
[31] Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561, 2019.
[32] Moin Nadeem, Anna Bethke, and Siva Reddy. Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456, 2020.
[33] Henry B Mann and Donald R Whitney. On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics, pages 50–60, 1947.
[34] Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047, 2023. | 2306.11507#44 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 44 | # 5.2 Embedding and syntax-based similarity analysis
As we just saw, the n-gram analysis is not refined enough to find similar code snippets between HumanEval and CodeExercises. Instead we use a combination of embedding and syntax-based distances. For the embedding distance we compute the L2 distance between the embedding of the code snippets where the embedding is derived from a pre-trained CodeGen-Mono 350M model [NPH+23]. We observe that the embedding distance is successful in capturing code pairs where the overall code semantics are similar, which can be inferred via the Python Docstring, function/class names, as well as the code structure. For the syntax-based distance we calculate the (string) edit distance between the abstract syntax trees (ASTs) of two given code snippets. The AST distance successfully identifies overlapping sections between code pairs while being agnostic to non-syntax text such as variable/function naming, comments, and Python Docstrings. For our pruning of CodeExercises we fix a threshold for the embedding distance, and we test several match rates τ for the AST distance. See Appendix C for examples of code pairs that are captured with the embedding distance and various AST match rates τ. We vary τ between 0.95 and 0.8, which corresponds to removing between 42.5K to 354K of the 879.5K total problems in CodeExercises. | 2306.11644#44 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
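A rough, self-contained proxy for the AST-based similarity described in the 2306.11644#44 chunk above, using only Python's ast and difflib; it is not the authors' implementation (which pairs an AST edit distance with an L2 distance between CodeGen-Mono 350M embeddings), and the identifier normalization below is just one possible way to ignore variable/function names:

```python
import ast
import difflib

class _NormalizeNames(ast.NodeTransformer):
    """Replace identifiers so the comparison ignores variable/function names."""

    def visit_Name(self, node: ast.Name) -> ast.AST:
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

    def visit_arg(self, node: ast.arg) -> ast.AST:
        node.arg = "_"
        return node

    def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.AST:
        self.generic_visit(node)
        node.name = "_"
        return node

def ast_match_rate(code_a: str, code_b: str) -> float:
    """Similarity in [0, 1] between the normalized AST dumps of two snippets."""
    try:
        dumps = [
            ast.dump(_NormalizeNames().visit(ast.parse(src)))
            for src in (code_a, code_b)
        ]
    except SyntaxError:
        return 0.0
    return difflib.SequenceMatcher(None, dumps[0], dumps[1]).ratio()

# Example: two functions that differ only in naming score close to 1.0.
print(ast_match_rate("def f(x):\n    return x + 1", "def g(y):\n    return y + 1"))
```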
2306.11698 | 44 | predictions for backdoored inputs, especially when the backdoored demonstrations are positioned close to the (backdoored) user inputs (Table 20, 21 in Section 7.3). GPT-4 is more vulnerable to backdoored demonstrations (Table 20 in Section 7.3). • Privacy. We find that: 1) GPT models can leak privacy-sensitive training data, such as the email addresses from the standard Enron Email dataset, especially when prompted with the context of emails (Table 24 in Section 8.1) or few-shot demonstrations of (name, email) pairs (Table 25a and 25b in Section 8.1). It also indicates that the Enron dataset is very likely included in the training data of GPT-4 and GPT-3.5. Moreover, under few-shot prompting, with supplementary knowledge such as the targeted email domain, the email extraction accuracy can be 100x higher than the scenarios where the email domain is unknown (Table 25a and 25b in Section 8.1); 2) GPT models can leak the injected private information in the conversation history. Overall, GPT-4 is more robust than GPT-3.5 in safeguarding personally identifiable information (PII), and both models are | 2306.11698#44 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 45 | [Figure: worked examples of resolving heterogeneous synthesis descriptions into a unified format - product morphology (e.g., "yellow needle crystals" → single crystal; "microcrystalline powder" → polycrystalline), metal source formulas standardized (e.g., Zn(Ac)2·2H2O → Zn(CH3COO)2·2H2O), molar amounts converted from masses or volumes to mmol, reaction temperatures normalized to °C (e.g., ambient/rt → 25 °C), and reaction times normalized to hours (e.g., overnight → 12 h, one day → 24 h, 2 weeks → 336 h).] | 2306.11296#45 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 45 | where e_h and e_t are the embeddings of the head and tail entities, while r is the representation of the relation.
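A minimal numpy sketch of the TransE score in Eq. (9); a full KGPLM training setup would add negative sampling and a margin- or ranking-based objective, which this excerpt does not detail:

```python
import numpy as np

def transe_score(e_h: np.ndarray, r: np.ndarray, e_t: np.ndarray) -> float:
    """Negative squared L2 distance between the translated head (e_h + r)
    and the tail embedding e_t; higher means the triple fits better."""
    return -float(np.sum((e_h + r - e_t) ** 2))

# Toy example with 4-dimensional embeddings.
e_h = np.array([0.1, 0.2, 0.0, 0.3])
r = np.array([0.0, 0.1, 0.1, -0.1])
e_t = np.array([0.1, 0.3, 0.1, 0.2])
print(transe_score(e_h, r, e_t))  # close to 0 for a well-fitting triple
```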
Generate New Data. There are also some studies that inject knowledge into PLMs by generating artificial text based on KGs. For example, AMS [81] constructs a commonsense-related question answering dataset for training PLMs based on an align-mask-select method. Specifically, it aligns sentences with commonsense knowledge triples, masks the aligned entities in the sentences and treats the masked sentences as questions. In the end, it selects several entities from KGs as distractor choices and trains the model to determine the correct answer. KGPT [82] crawls sentences with hyperlinks from Wikipedia and aligns the hyperlinked entities to the KG Wikidata to construct the knowledge-grounded corpus KGText. KGLM [83] constructs the Linked WikiText-2 dataset by aligning texts in WikiText-2 and entities in Wikidata. ATOMIC [84] organizes the inference knowledge in 877K textual descriptions into a KG and trains a PLM with a conditional sequence generation problem that encourages the model to generate the target sequence given an event phrase and an inference dimension. KEPLER [85] constructs a large-scale KG dataset with aligned entity descriptions from its corresponding Wikipedia pages for training KGPLMs. | 2306.11489#45 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
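A schematic illustration of the align-mask-select recipe summarized in the 2306.11489#45 chunk above; the example sentence, entity strings, and distractor sampling are placeholders, not the actual AMS construction, which works over aligned commonsense triples:

```python
import random

def align_mask_select(sentence: str, answer_entity: str,
                      kg_entities: list, num_distractors: int = 3) -> dict:
    """Build one multiple-choice question: mask the aligned entity in the
    sentence and draw distractor entities from the knowledge graph."""
    question = sentence.replace(answer_entity, "[MASK]")
    pool = [e for e in kg_entities if e != answer_entity]
    distractors = random.sample(pool, k=min(num_distractors, len(pool)))
    choices = distractors + [answer_entity]
    random.shuffle(choices)
    return {"question": question, "choices": choices, "answer": answer_entity}

print(align_mask_select(
    "You would use an umbrella because it keeps you dry in the rain.",
    "umbrella",
    ["umbrella", "ladder", "blanket", "flashlight", "raincoat"],
))
```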
2306.11507 | 45 | [35] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
[36] Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu. Slic- hf: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023.
[37] Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. Document-level machine translation with large language models, 2023.
[38] Henry Gilbert, Michael Sandborn, Douglas C. Schmidt, Jesse Spencer-Smith, and Jules White. Semantic compression with large language models, 2023.
[39] J. Manyika. an early experiment with generative ai, 2023. https://bard.google.com/. | 2306.11507#45 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 45 | τ: 0.95 | 0.9 | 0.85 | 0.8 (each reported as similar / non-similar / total)
Problem Count: 71 / 93 / 164 | 93 / 71 / 164 | 106 / 58 / 164 | 116 / 48 / 164
phi-1: 81.7% / 26.9% / 50.6% | 63.4% / 33.8% / 50.6% | 62.3% / 29.3% / 50.6% | 59.5% / 29.2% / 50.6%
phi-1 retrained on pruned data: 74.6% / 32.3% / 50.6% | 51.6% / 36.6% / 45.1% | 52.8% / 34.5% / 46.3% | 52.6% / 27.1% / 45.1%
StarCoder-Prompted [LAZ+23]: 57.7% / 29.0% / 41.5% | 48.4% / 32.4% / 41.5% | 47.2% / 31.0% / 41.5% | 45.7% / 31.2% / 41.5% | 2306.11644#45 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 45 | the conversation history. Overall, GPT-4 is more robust than GPT-3.5 in safeguarding personally identifiable information (PII), and both models are robust to specific types of PII, such as Social Security Numbers (SSN), possibly due to the explicit instruction tuning for those PII keywords. However, both GPT-4 and GPT-3.5 would leak all types of PII when prompted with privacy-leakage demonstrations during in-context learning (Figure 19 in Section 8.2); 3) GPT models demonstrate different capabilities in understanding different privacy-related words or privacy events (e.g., they will leak private information when told "confidentially" but not when told "in confidence"). GPT-4 is more likely to leak privacy than GPT-3.5 given our constructed prompts, potentially due to the fact that it follows the (misleading) instructions more precisely (Figure 21 and Figure 22 in Section 8.3). • Machine Ethics. We find that: 1) GPT-3.5 and GPT-4 are competitive with non-GPT models (e.g., BERT, ALBERT-xxlarge) that are fine-tuned on a large number of samples in | 2306.11698#45 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 46 | 25°C RESOLUTION Reaction Time 45h → 15h 48 hours → 18h one day → 24h overnight → 12h 2 weeks → 336h fumaric acid >>> 20 min → 0.33h O=C(O)C=CC(=O)O 5-chlorobenzimidazole >>> ClC=1C=CC=2NC=NC2C1 Linker Representation H2BDC >>> O=C(O)C1=CC=C(C=C1)C(=O)O Mixture Notation 1H-pyrrole-2,5-dicarboxylic Acid >>> N,N-diethylformamide, de-ionized water → DEF; H2O O=C(O)C1=CC=C(N1)C(=O)O dmf/ethanol/water → DMF; EtOH; H2O 3:1 DMF:Ethanol → DMF; EtOH DMAc and CH3OH → DMA; MeOH | 2306.11296#46 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
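The chunk for 2306.11296#46 above lists standardization rules used during data unification: reaction times are normalized to hours, linkers are written as SMILES strings, and solvent mixtures are mapped to abbreviation lists. The snippet below is a minimal sketch of how such rules could be applied programmatically; the function names, lookup tables, and regular expressions are illustrative assumptions and are not the paper's released code.

```python
import re

# Illustrative lookup tables; the rules actually used in the paper's workflow
# may differ and are only partially visible in the chunk above.
TIME_WORDS = {"overnight": 12.0, "one day": 24.0}
HOURS_PER_UNIT = {"min": 1 / 60, "h": 1.0, "hour": 1.0, "hours": 1.0,
                  "day": 24.0, "days": 24.0, "week": 168.0, "weeks": 168.0}
SOLVENT_ABBREV = {"dmf": "DMF", "dmac": "DMA", "ethanol": "EtOH",
                  "methanol": "MeOH", "water": "H2O", "de-ionized water": "H2O"}

def normalize_time(text: str) -> str:
    """Convert a free-text reaction time (e.g. '20 min', '2 weeks') to hours."""
    t = text.strip().lower()
    if t in TIME_WORDS:
        return f"{TIME_WORDS[t]:g}h"
    m = re.match(r"([\d.]+)\s*([a-z]+)", t)
    if m and m.group(2) in HOURS_PER_UNIT:
        return f"{round(float(m.group(1)) * HOURS_PER_UNIT[m.group(2)], 2):g}h"
    return text  # leave unrecognized values untouched

def normalize_solvents(text: str) -> str:
    """Map a solvent-mixture description to semicolon-separated abbreviations."""
    parts = re.split(r"/| and |;", text.lower())
    abbrevs = [SOLVENT_ABBREV[p.strip()] for p in parts if p.strip() in SOLVENT_ABBREV]
    return "; ".join(abbrevs) if abbrevs else text

print(normalize_time("20 min"))               # -> 0.33h
print(normalize_time("2 weeks"))              # -> 336h
print(normalize_solvents("DMF and ethanol"))  # -> DMF; EtOH
```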
2306.11489 | 46 | Optimize Word Masks. MLM is the most commonly used pre-training task in PLMs, and the number and distribution of masks have a substantial influence on the performance of PLMs [116]. However, the random masking method may break the correlation between consecutive words, making it difficult for PLMs to learn semantic information. To address this issue, a few studies have proposed replacing the random masking strategy with a knowledge masking strategy that selects mask targets based on the knowledge from KGs, forcing models to learn enough knowledge to accurately predict the masked contents. For instance, ERNIE [31] recognizes named entities in texts and aligns them with their corresponding entities in KGs. It then randomly masks entities in the input text and trains the model to predict their counterparts in KGs. In WKLM [86], entity mentions in the original texts are substituted with entities of identical types, and the model is trained to differentiate accurate entity mentions from those that are corrupted, which effectively improves its fact completion performance. GLM [35] reformulates the MLM objective to an entity-level masking strategy that identifies entities and selects informative ones by considering both document frequency and mutual reachability of the entities detected in the text. | 2306.11489#46 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
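The chunk for 2306.11489#46 above describes entity-level ("knowledge") masking, where whole KG-aligned entity mentions are masked instead of random tokens so the model must recall factual knowledge to reconstruct them. The snippet below is a minimal sketch of that idea; the tokenization, span format, mask token, and mask rate are illustrative assumptions and do not reproduce the exact procedures of ERNIE, WKLM, or GLM.

```python
import random

def knowledge_mask(tokens, entity_spans, mask_token="[MASK]", mask_rate=0.15, seed=0):
    """Mask whole KG-aligned entity spans instead of individual random tokens."""
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)      # prediction targets only at masked positions
    for start, end in entity_spans:
        if rng.random() < mask_rate:
            for i in range(start, end):
                labels[i] = masked[i]  # target = original token of the entity mention
                masked[i] = mask_token
    return masked, labels

# Toy usage: spans (0, 2) and (3, 7) mark entity mentions aligned to KG entities.
tokens = ["Bob", "Dylan", "wrote", "Blowin'", "in", "the", "Wind", "in", "1962"]
entity_spans = [(0, 2), (3, 7)]
print(knowledge_mask(tokens, entity_spans, mask_rate=1.0))
```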