Dataset schema (one row per paper chunk):
- doi: string (10 chars)
- chunk-id: int64 (0 to 936)
- chunk: string (401 to 2.02k chars)
- id: string (12 to 14 chars)
- title: string (8 to 162 chars)
- summary: string (228 to 1.92k chars)
- source: string (31 chars)
- authors: string (7 to 6.97k chars)
- categories: string (5 to 107 chars)
- comment: string (4 to 398 chars)
- journal_ref: string (8 to 194 chars)
- primary_category: string (5 to 17 chars)
- published: string (8 chars)
- updated: string (8 chars)
- references: list
2306.12420
10
# 2.6 RLHF as Finetuning Large language models (LLMs) are often pretrained to replicate the vast amount of text available on the internet, which unfortunately includes text that does not align with human preferences [5, 6, 24]. Examples of such content include falsehoods, offensive comments, or even harmful texts. However, there is a growing need to explore alternative training objectives that can guide LLMs to generate text that aligns with human preferences. By doing so, we can ensure that LLMs produce text that is more helpful, honest, and harmless for humans, the so-called ‘HHH’ rules [2]. [24] divides the alignment process into three steps: SFT, reward modeling, and RLHF (reward optimization).
2306.12420#10
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
10
# 3.1 Collecting a Large Number of HTML Files First, we collect a vast amount of raw web documents by considering the 25 most recent Common Crawl (https://commoncrawl.org/) dumps available at the time of creation, spanning from February 2020 to January/February 2023. We extract the main text from the documents while discarding documents with text of insufficient quality. This process results in 41.2 billion documents. To filter out non-English content, we apply the FastText classifier (Joulin et al., 2017) to the extracted text, which removes 63.6% of the documents. We perform a MinHash (Broder, 1997) deduplication to remove duplicate content. Additionally, we filter out documents with significant proportions of repeated paragraphs and n-grams, following the methodology used in MassiveText (Rae et al., 2022). Previous studies (Lee et al., 2022; Abbas et al., 2023) have demonstrated the prevalence of duplication in crawled data and the benefits of training on deduplicated data.
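The FastText language-identification step above can be pictured with the library's public `lid.176.bin` model. The sketch below is a minimal illustration; the confidence threshold and the exact model are assumptions, not the paper's published settings.

```python
# Minimal sketch of FastText-based English filtering, assuming the public
# lid.176.bin language-ID model and a 0.5 confidence threshold (both are
# illustrative choices; the paper does not publish these exact settings).
import fasttext

lid_model = fasttext.load_model("lid.176.bin")

def is_english(text: str, threshold: float = 0.5) -> bool:
    # fasttext predicts on a single line of text, so strip newlines first.
    labels, probs = lid_model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__en" and probs[0] >= threshold

docs = ["The quick brown fox jumps.", "Le renard brun saute rapidement."]
english_docs = [d for d in docs if is_english(d)]
```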
2306.16527#10
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
11
Although technological advancement has often improved people’s lives, we ought to remember that, as our technology grows in power, so too does its destructive potential. Consider the invention of nuclear weapons. Last century, for the first time in our species’ history, humanity possessed the ability to destroy itself, and the world suddenly became much more fragile. Our newfound vulnerability revealed itself in unnerving clarity during the Cold War. On a Saturday in October 1962, the Cuban Missile Crisis was cascading out of control. US warships enforcing the blockade of Cuba detected a Soviet submarine and attempted to force it to the surface by dropping low-explosive depth charges. The submarine was out of radio contact, and its crew had no idea whether World War III had already begun. A broken ventilator raised the temperature up to 140°F in some parts of the submarine, causing crew members to fall unconscious as depth charges exploded nearby.
2306.12001#11
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
11
| Model | PubMedQA (ID) | MedQA-USMLE (OOD) | MedMCQA (ID) | Average |
|---|---|---|---|---|
| Human (pass) | 60.0 | 50.0 | - | - |
| Human (expert) | 78.0 | 87.0 | 90.0 | 85.0 |
| InstructGPT-175B | 73.2 | 46.0 | 44.0 | 54.4 |
| ChatGPT | 63.9 | **57.0** | 44.7 | 55.2 |
| LLaMA-7B | 5.2 | 27.1 | 24.3 | 18.9 |
| LLaMA-33B | 1.8 | 43.4 | 30.3 | 25.2 |
| Task-tuned LLaMA-7B (full) | **75.1** | 44.5 | 49.9 | 56.5 |
| Task-tuned LLaMA-33B (LoRA) | 74.0 | 51.3 | **50.2** | **58.5** |

Table 1: The overall performance of task-tuned LLaMA models and the comparison with human and existing models on three medical datasets. PubMedQA and MedMCQA are evaluated on in-domain (ID) tests and MedQA-USMLE is evaluated on the out-of-domain (OOD) test. Bold represents the best among each dataset.
2306.12420#11
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
11
Similar to Brown et al. (2020), we employ a logistic regression classifier with hashed token frequencies to ensure high-quality text. This classifier, trained using curated datasets like Wikipedia or OpenWebText (Gokaslan and Cohen, 2019) as positive examples and documents sampled from Common Crawl as negative ones, is fast and effective at detecting human-written text. After these steps, we are left with 1.1 billion documents and their HTML sources from the associated Common Crawl WARC files. # 3.2 Simplifying HTML Files The original HTML content of a document contains a wealth of valuable information that proves highly beneficial in the process of filtering out undesirable text and images. Therefore, we prioritize pre-processing the raw HTML into simplified HTML, making the subsequent extraction of textual and visual elements more efficient. To this end, we devise multiple pre-processing strategies for an HTML DOM tree. By manually inspecting instances of all HTML nodes, we differentiate nodes likely to contain relevant texts or images from those that should be discarded, and we formulate specific rules for each type of node. After these pre-processing steps, the resulting simplified HTML files are more than ten times smaller and have been stripped of a large proportion of generic text (spam, ads, boilerplate template, etc.) and generic images, such as logos, while retaining the relevant content. # 3.3 Extracting Multimodal Web Documents
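As a rough illustration of the classifier described above, the sketch below trains a logistic regression on hashed token frequencies. The feature size, decision threshold, and toy training examples are assumptions for illustration only.

```python
# Toy sketch of a quality classifier over hashed token frequencies, in the
# spirit of the approach above. Training data, n_features, and the 0.5
# decision threshold are illustrative assumptions.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Positives: curated text (e.g. Wikipedia); negatives: raw Common Crawl text.
positives = ["The industrial revolution began in Britain in the late 18th century."]
negatives = ["CLICK here!!! free $$$ win now win now win now"]
texts = positives + negatives
labels = [1] * len(positives) + [0] * len(negatives)

quality_clf = make_pipeline(
    HashingVectorizer(n_features=2**18, alternate_sign=False),
    LogisticRegression(),
)
quality_clf.fit(texts, labels)

def looks_high_quality(doc: str, threshold: float = 0.5) -> bool:
    return quality_clf.predict_proba([doc])[0, 1] >= threshold
```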
2306.16527#11
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
12
The submarine carried a nuclear-armed torpedo, which required consent from both the captain and political officer to launch. Both provided it. On any other submarine in Cuban waters that day, that torpedo would have launched—and a nuclear third world war may have followed. Fortunately, a man named Vasili Arkhipov was also on the submarine. Arkhipov was the commander of the entire flotilla and by sheer luck happened to be on that particular submarine. He talked the captain down from his rage, convincing him to await further orders from Moscow. He averted a nuclear war and saved millions or billions of lives—and possibly civilization itself. # Malicious Use
2306.12001#12
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
12
RLHF (reward optimization). We have integrated reward modeling into our LMFlow framework. For reward optimization, PPO has been shown to be effective in various studies [31, 12]. However, it relies on a trial-and-error approach through interaction with the environment, making it less stable and efficient than supervised learning [8]. A more feasible option for finetuning generative models may be to use a reward function instead of a pre-determined supervised dataset, especially when high-quality supervised samples are difficult to collect. To address this, we propose a new alignment method for generative models called RAFT [11]. RAFT utilizes a reward model to rank the outputs of the generative model, allowing us to continue training with supervised finetuning (SFT)-like techniques on the selected samples. This approach encourages the generative model to prioritize samples with higher rewards and offers significant computational advantages over PPO, resulting in substantial savings in memory and gradient computations. Moreover, due to the stability of SFT-like training, our approach demonstrates lower sample complexity and requires fewer learnable parameters, making it easily adaptable to any generative model. We believe that this novel alignment algorithm represents a competitive and innovative approach that promotes well-behaved generative models. # 2.7 Efficient Tuning
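A minimal sketch of one RAFT-style iteration, as described above: sample several responses per prompt, rank them with the reward model, and finetune on the top-ranked ones. The helpers `generate`, `reward_score`, and `sft_step` are hypothetical stand-ins, not LMFlow's actual API.

```python
# Hedged sketch of one RAFT iteration: reward-ranked sample selection
# followed by an SFT-style update. `generate`, `reward_score`, and
# `sft_step` are hypothetical stand-ins for the policy sampler, the
# reward model, and a supervised finetuning step.
def raft_iteration(prompts, generate, reward_score, sft_step,
                   k: int = 8, keep_ratio: float = 0.125):
    selected = []
    for prompt in prompts:
        # Sample k candidate responses from the current policy.
        candidates = [generate(prompt) for _ in range(k)]
        # Rank candidates by reward and keep the best fraction.
        ranked = sorted(candidates,
                        key=lambda resp: reward_score(prompt, resp),
                        reverse=True)
        n_keep = max(1, int(k * keep_ratio))
        selected.extend((prompt, resp) for resp in ranked[:n_keep])
    # Standard supervised finetuning on the reward-filtered pairs;
    # no policy-gradient machinery (unlike PPO) is needed.
    sft_step(selected)
    return selected
```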
2306.12420#12
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
12
# 3.3 Extracting Multimodal Web Documents In this step, we transform the simplified HTML files previously obtained into a structured multimodal web document format. This format consists of interleaved texts and images. We meticulously preserve the original structure of the web pages from the simplified HTML files by extracting the texts and image links while maintaining their rendering defined by the DOM tree. Given that each HTML tag denotes a distinct separation between the preceding and subsequent nodes, we leverage that information to retain line breaks and line feeds from the original page, preserving the formatting and visual rendering of the content. We obtain 3.6 billion image links and successfully download 55% of them (approximately 2 billion images). # 3.4 Filtering Multimodal Web Documents The filtering process comprises two distinct steps operating at different granularity levels. In the first step, filtering occurs at the node level for images and the paragraph level for text. This step guarantees that only high-quality and relevant images and paragraphs are retained. Each paragraph or image is evaluated based on specific criteria and may undergo modifications or be eliminated if necessary. The second step, conducted at the document level, involves deciding whether to retain or discard the output documents obtained from the first step. Most text filters used in both steps are primarily derived from Laurençon et al. (2022).
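To make the interleaved-document format concrete, here is a small sketch that walks a (simplified) HTML tree and emits text and image links in document order. The real pipeline's tag rules and line-break handling are considerably more involved; this only illustrates the target format.

```python
# Sketch of converting simplified HTML into an interleaved text/image
# sequence, illustrating the multimodal web document format described
# above (not the paper's actual extraction code).
from bs4 import BeautifulSoup, NavigableString, Tag

def extract_interleaved(html: str):
    soup = BeautifulSoup(html, "html.parser")
    parts = []  # each item is ("text", str) or ("image", url)
    for node in soup.descendants:
        if isinstance(node, Tag) and node.name == "img" and node.get("src"):
            parts.append(("image", node["src"]))
        elif isinstance(node, NavigableString):
            text = str(node).strip()
            if text:
                parts.append(("text", text))
    return parts

doc = extract_interleaved(
    "<p>A cat naps on a sofa.</p><img src='http://example.com/cat.jpg'>"
    "<p>It wakes up for dinner.</p>"
)
# [('text', 'A cat naps on a sofa.'), ('image', 'http://example.com/cat.jpg'),
#  ('text', 'It wakes up for dinner.')]
```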
2306.16527#12
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12420
13
# 2.7 Efficient Tuning LMFlow supports low-rank adaptation (LoRA) [14] tuning based on the implementation of huggingface/peft (https://github.com/huggingface/peft) [22]. LoRA is an efficient tuning method that involves freezing the weights of the pretrained model and incorporating trainable rank-decomposition matrices into each layer of the Transformer architecture. This approach significantly reduces the number of trainable parameters. # 2.8 Inference LMFlow provides an easy-to-use inference interface for LLMs. Based on DeepSpeed (https://github.com/microsoft/DeepSpeed), LMFlow supports parameter partitioning with zero-offload strategies as introduced by [28]. In LMFlow, the inference interface is provided by an inferencer class. The inferencer contains two important inference methods: inference and stream_inference. The distinction lies in whether the output is printed word by word in real time. # 3 API Documentation Please refer to https://optimalscale.github.io/LMFlow/autoapi/index.html for the details of the API documentation. # 4 Case Studies In this section, we provide case studies of LMFlow in task tuning, instruction tuning, and alignment tuning. # 4.1 Task Tuning
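For readers unfamiliar with huggingface/peft, the sketch below shows the general shape of LoRA tuning with that library. The base model, rank, and dropout are illustrative choices, not LMFlow's actual defaults.

```python
# Hedged sketch of LoRA with huggingface/peft: freeze the base model and
# inject trainable rank-decomposition matrices. The model name and
# hyperparameters are illustrative, not LMFlow's actual configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

lora_config = LoraConfig(
    r=8,               # rank of the decomposition matrices
    lora_alpha=16,     # scaling factor for the LoRA updates
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```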
2306.12420#13
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
13
first step. Most text filters used in both steps are primarily derived from Laurençon et al. (2022). Node-level image filtering. We discard images that are too small, excessively large, or have disproportionate dimensions. We observe that these images are often indicative of low-quality or irrelevant content. To eliminate some logos and generic images, we remove images whose URLs contain one of the banned sub-strings, like logo. Paragraph-level text filtering. We apply multiple filters to text paragraphs to remove undesirable content. Specifically, paragraphs that contain an insufficient number of words are discarded. Additionally, we filter out paragraphs with high repetition ratios, excessive ratios of special characters, low ratios of stop words, low punctuation ratios, high proportions of flagged words associated with adult or inappropriate content, or excessively high perplexity scores (as measured by an n-gram language model trained on Wikipedia (Heafield, 2011)). To identify boilerplate sentences or invitations to share articles on social networks, we create a list of frequently used words associated with these paragraphs and remove paragraphs containing an excessive proportion of words from this list. To further identify machine-generated content, we extract words from web-crawled documents to form a list of common words and discard documents with a low ratio of common words.
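The sketch below implements a subset of the paragraph-level heuristics described above (word count, repetition ratio, special-character ratio). Every threshold is an assumed value; the paper's full filter set also covers stop words, punctuation, flagged words, and perplexity.

```python
# Illustrative paragraph-level filters mirroring a subset of the heuristics
# above. Every threshold here is an assumed value, not the paper's cutoff.
def keep_paragraph(paragraph: str,
                   min_words: int = 10,
                   max_repetition: float = 0.3,
                   max_special_ratio: float = 0.3) -> bool:
    words = paragraph.split()
    if len(words) < min_words:           # insufficient number of words
        return False
    repetition = 1.0 - len(set(words)) / len(words)
    if repetition > max_repetition:      # high repetition ratio
        return False
    special = sum(not (c.isalnum() or c.isspace()) for c in paragraph)
    if special / len(paragraph) > max_special_ratio:  # special characters
        return False
    return True

kept = [p for p in ["too short", "spam " * 40,
                    "This paragraph is long enough and varied enough to pass the filters."]
        if keep_paragraph(p)]
```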
2306.16527#13
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
14
[Figure 2: In this paper we cover four categories of AI risks and discuss how to mitigate them. The figure pairs hazards (✗) such as bioterrorism, surveillance states, automated warfare, evolutionary pressures, weak safety culture, leaked AI systems, power-seeking, and deception with suggestions (✓) such as access restrictions, legal liability, international coordination, safety regulation, information security, external audits, use-case restrictions, and safety research.] Carl Sagan once observed, “If we continue to accumulate only power and not wisdom, we will surely destroy ourselves” [3]. Sagan was correct: The power of nuclear weapons was not one we were ready for. Overall, it has been luck rather than wisdom that has saved humanity from nuclear annihilation, with multiple recorded instances of a single individual preventing a full-scale nuclear war. AI is now poised to become a powerful technology with destructive potential similar to nuclear weapons. We do not want to repeat the Cuban Missile Crisis. We do not want to slide toward a moment of peril where our survival hinges on luck rather than the ability to use this technology wisely. Instead, we need to work proactively to mitigate the risks it poses. This necessitates a better understanding of what could go wrong and what to do about it.
2306.12001#14
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
14
# 4 Case Studies In this section, we provide case studies of LMFlow in task tuning, instruction tuning, and alignment tuning. # 4.1 Task Tuning The aim of task tuning is to enhance the proficiency of a language model in a specific field, such as the medical or financial domain, by imparting domain-specific information that allows it to better adapt to the target subject matter. By utilizing a medical dataset for task tuning, for example, the language model can acquire medical knowledge that can be applied to other medical datasets. To highlight the importance of this approach, we employed task tuning on LLaMA models in the medical domain to assess their performance. The evaluations on three medical datasets revealed significant enhancements on both in-domain (PubMedQA [20], MedMCQA [25]) and out-of-domain (MedQA-USMLE [19]) datasets. The LLaMA-33B (LoRA) performance is achieved with only about 16 hours of finetuning on the training split of PubMedQA and MedMCQA, using a single server with 8 A100 GPUs. # Instruction Tuning
2306.12420#14
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
14
Document-level filtering. At the document level, we remove all documents with no images or with an excessively high number of images. For text filters, the same filters used at the paragraph level are applied, sometimes with stricter cutoff values. After these filtering steps, we are left with 365 million web documents and 1.4 billion images. At this step, images can be duplicated across documents. # 3.5 Responsible Filtering and Deduplication We take measures to minimize the amount of inappropriate content in the dataset. In particular, based on manual inspections and tool availability, we implement filters to respect data consent and remove images with pornographic content. Additionally, we heavily deduplicate content. Exclusion of opted-out images. To respect the preferences of content creators, we remove all images for which creators explicitly opted out of AI model training. We used the Spawning API (https://api.spawning.ai/spawning-api) to verify that the images in the dataset respect the original copyright owners’ choices.
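As a toy illustration of the document-level image criterion above, the following check drops documents with no images or too many. The cutoff is an assumed value, since the exact threshold is not stated in this excerpt.

```python
# Toy document-level filter: require at least one image and at most an
# assumed maximum (the paper's exact cutoff is not stated in this excerpt).
def keep_document(image_urls: list, max_images: int = 30) -> bool:
    return 0 < len(image_urls) <= max_images

docs = [{"image_urls": []}, {"image_urls": ["http://example.com/a.jpg"]}]
kept = [d for d in docs if keep_document(d["image_urls"])]
```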
2306.16527#14
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
15
Luckily, AI systems are not yet advanced enough to contribute to every risk we discuss. But that is cold comfort in a time when AI development is advancing at an unprecedented and unpredictable rate. We consider risks arising from both present-day AIs and AIs that are likely to exist in the near future. If we wait for more advanced systems to be developed before taking action, it may be too late. In this paper, we will explore various ways in which powerful AIs could bring about catastrophic events with devastating consequences for vast numbers of people. We will also discuss how AIs could present existential risks—catastrophes from which humanity would be unable to recover. The most obvious such risk is extinction, but there are other outcomes, such as a permanent dystopian society, which would also constitute an existential catastrophe. We outline many possible catastrophes, some of which are more likely than others and some of which are mutually incompatible. This approach is motivated by the principles of risk management. We prioritize asking “what could go wrong?” rather than reactively waiting for catastrophes to occur. This proactive mindset enables us to anticipate and mitigate catastrophic risks before it’s too late. To help orient the discussion, we decompose catastrophic risks from AIs into four risk sources that warrant intervention: • Malicious use: Malicious actors using AIs to cause large-scale devastation.
2306.12001#15
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
15
# Instruction Tuning Following previous work in instruction tuning [37, 32, 7], we finetune the model with instruction-following data. Expanding upon the initial idea of self-instruct [37] techniques, we incorporated several different data sources and built a new dataset called LMFlow Dataset (http://lmflow.org:5000/lmflow_data.tar.gz). The new training split is created by merging the following datasets: • ShareGPT: 50K English and 10K Chinese examples randomly sampled from ShareGPT (https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered). • GPT-4-LLM [27]: 52K English examples from GPT-4-LLM (https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM). • BELLE [17, 18]: 80K Chinese examples randomly sampled from BELLE (https://github.com/LianjiaTech/BELLE). This data fusion takes the balance between Chinese and English data into consideration. Furthermore, we sample only a small subset from ShareGPT and BELLE instead of using the full data, which would require large computational resources. We call our instruction-tuned model Robin. Based on LMFlow Dataset, we trained Robin-7B-v2, Robin-13B-v2, Robin-33B-v2, and Robin-65B-v2 on the respective LLaMA base models. The delta weights of Robin are released at https://github.com/OptimalScale/LMFlow#model-zoo.
2306.12420#15
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
15
Image deduplication based on URL. Some images could be present across different documents. We observe that this is particularly true for browser-specific icons or common advertisements encountered during the crawling process. To address this issue, we remove all images that appear more than ten times across the entire dataset. We intentionally do not perform strict deduplication, as we notice that when an image is duplicated only a few times across different documents, the surrounding text and contextual information tend to be different. We also deduplicate images within the same document. NSFW image filtering. To reduce explicit adult content, we use an open-source NSFW classifier to remove entire documents containing pornographically classified images. We also filter out images with URLs containing banned sub-strings. Document deduplication based on URL and set of images. We complete the initial deduplication step by forming clusters of documents with the same URLs and retaining the most recent document within each cluster. We repeat this operation by forming clusters of documents containing identical sets of images. Paragraph deduplication across documents of the same domain names. To remove generic spam phrases commonly found at the end of documents, we perform paragraph-level exact deduplication within documents sharing the same domain name, resulting in the elimination of approximately 15% of the text.
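The frequency-based image deduplication above is easy to picture in code: count each image URL across the corpus, drop any URL seen more than ten times (the threshold stated in the text), and remove repeats within a document. The in-memory document representation below is an assumption for illustration.

```python
# Sketch of URL-based image deduplication: drop images whose URL appears
# more than ten times corpus-wide (threshold from the text), and remove
# repeats of the same URL within a single document.
from collections import Counter

def dedup_images(documents):
    counts = Counter(url for doc in documents for url in doc["image_urls"])
    for doc in documents:
        seen = set()
        kept = []
        for url in doc["image_urls"]:
            if counts[url] <= 10 and url not in seen:
                kept.append(url)
                seen.add(url)
        doc["image_urls"] = kept
    return documents
```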
2306.16527#15
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
16
To help orient the discussion, we decompose catastrophic risks from AIs into four risk sources that warrant intervention: • Malicious use: Malicious actors using AIs to cause large-scale devastation. • AI race: Competitive pressures that could drive us to deploy AIs in unsafe ways, despite this being in no one’s best interest. • Organizational risks: Accidents arising from the complexity of AIs and the organizations developing them. • Rogue AIs: The problem of controlling a technology more intelligent than we are. These four sections—malicious use, AI race, organizational risks, and rogue AIs—describe causes of AI risks that are intentional, environmental/structural, accidental, and internal, respectively [4]. We will describe how concrete, small-scale examples of each risk might escalate into catastrophic outcomes. We also include hypothetical stories to help readers conceptualize the various processes and dynamics discussed in each section, along with practical safety suggestions to avoid negative outcomes. Each section concludes with an ideal vision depicting what it would look like to mitigate that risk. We hope this survey will serve as a practical introduction for readers interested in learning about and mitigating catastrophic AI risks. # 2 Malicious Use
2306.12001#16
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
16
To evaluate the models’ instruction-following ability, we participated in the Hugging Face Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The performance is shown in Table 2. Specifically, we carried out in-depth finetuning based on the entire LLaMA series, including 7B, 13B, 33B, and 65B, all of which achieved superior results. Robin-7B-v2 scored 51.7 in the OpenLLM standard test, and Robin-13B-v2 even reached as high as 59.1, ranking sixth and surpassing many 33B models. The achievements of Robin-33B-v2 and Robin-65B-v2 are even more striking, with scores of 64.1 and 65.2 respectively, firmly securing the top positions. In addition, we collected GPT-4 instruction data from GPT-4-LLM [27], which provides many instruction-tuning examples labeled by GPT-4, and created a test set by sampling 1,000 English examples. We manually filtered examples with the following issues, leaving 767 effective samples after filtering: • Long responses with too many nonsense words • Incomplete input texts • Specific domains involving chemistry/biology, where most LLM models do not possess the knowledge and always fail
2306.12420#16
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
16
exact deduplication within documents sharing the same domain name, resulting in the elimination of approximately 15% of the text. Following these filtering and deduplication steps, the final dataset contains 141 million documents and 353 million images, of which 298 million are unique. We observe that using stricter values for the filtering steps yields fewer multimodal documents, although not of higher quality. As such, we invite users who are interested in manipulating a smaller subset of OBELICS to start with a random subset. # 4 Analysis of OBELICS Figure 1 provides an example showcasing an original webpage alongside the resulting multimodal web document. Extracting and filtering the multimodal document is non-trivial as it requires carefully removing undesirable information on the left, top, and bottom of the page, such as menus and navigation bars. We provide other examples at https://huggingface.co/spaces/HuggingFaceM4/obelics_visualization and in Figures 7, 8 and 9. Given the scale of OBELICS, it would be prohibitive to describe its content exhaustively. Instead, we provide high-level statistics and analyses that shed light on the dataset’s properties. # 4.1 General Statistics
2306.16527#16
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
17
# 2 Malicious Use On the morning of March 20, 1995, five men entered the Tokyo subway system. After boarding separate subway lines, they continued for several stops before dropping the bags they were carrying and exiting. An odorless, colorless liquid inside the bags began to vaporize. Within minutes, commuters began choking and vomiting. The trains continued on toward the heart of Tokyo, with sickened passengers leaving the cars at each station. The fumes were spread at each stop, either by emanating from the tainted cars or through contact with people’s clothing and shoes. By the end of the day, 13 people lay dead and 5,800 seriously injured. The group responsible for the attack was the religious cult Aum Shinrikyo [5]. Its motive for murdering innocent people? To bring about the end of the world. Powerful new technologies offer tremendous potential benefits, but they also carry the risk of empowering malicious actors to cause widespread harm. There will always be those with the worst of intentions, and AIs could provide them with a formidable tool to achieve their objectives. Moreover, as AI technology advances, severe malicious use could potentially destabilize society, increasing the likelihood of other risks.
2306.12001#17
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
17
• Long response with too many nonsense words
• Incomplete input texts
• Specific domains involving chemistry/biology, where most LLM models do not possess the knowledge and always fail

4 http://lmflow.org:5000/lmflow_data.tar.gz
5 https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered
6 https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
7 https://github.com/LianjiaTech/BELLE
8 Robin is a small passerine bird that belongs to the family Turdidae. Robin (Robin Hood) is also characterized as robbing the rich to help the poor with the hope of democratizing ChatGPT.
9 https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
2306.12420#17
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
17
# 4.1 General Statistics

Table 1: General statistics of OBELICS and the current largest alternatives.

Dataset     Images   Docs    Tokens   % unique images   Open
KOSMOS-1    -        71M     -        -                 ✗
M3W         185M     43M     -        -                 ✗
mmc4-ff     385M     79M     34B      60.6%             ✓
mmc4        585M     103M    43B      -                 ✓
OBELICS     353M     141M    115B     84.3%             ✓

[Figure 3: Distribution of images: percentage of images belonging to a document with at most x images (x-axis: max # of images in doc; curves: mmc4, OBELICS).]

Table 1 compares OBELICS against the largest existing alternatives. mmc4-ff is the mmc4 dataset with fewer faces. Our dataset has the highest number of unique documents and total tokens while containing a huge number of images.
2306.16527#17
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
18
In this section, we will explore the various ways in which the malicious use of advanced AIs could pose catastrophic risks. These include engineering biochemical weapons, unleashing rogue AIs, using persuasive AIs to spread propaganda and erode consensus reality, and leveraging censorship and mass surveillance to irreversibly concentrate power. We will conclude by discussing possible strategies for mitigating the risks associated with the malicious use of AIs. Unilateral actors considerably increase the risks of malicious use. In instances where numerous actors have access to a powerful technology or dangerous information that could be used for harmful purposes, it only takes one individual to cause significant devastation. Malicious actors themselves are the clearest example of this, but recklessness can be equally dangerous. For example, a single research team might be excited to open source an AI system with biological research capabilities, which would speed up research and potentially save lives, but this could also increase the risk of malicious use if the AI system could be repurposed to develop bioweapons. In situations like this, the outcome may be determined by the least risk-averse research group. If only one research group thinks the benefits outweigh the risks, it could act unilaterally, deciding the outcome even if most others don’t agree. And if they are wrong and someone does decide to develop a bioweapon, it would be too late to reverse course.
2306.12001#18
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
18
MODEL               ARC-C   HellaSwag   MMLU   TruthfulQA   Average

7B
LLaMA-7B [34]       46.6    75.6        34.2   34.1         47.6
Baize-7B-v2 [39]    44.5    73.3        35.6   40.8         48.6
MPT-7B [33]         47.7    77.7        35.6   33.4         48.6
Falcon-7B [26]      47.9    78.1        35.0   34.3         48.8
Robin-7B-v2         49.4    74.6        39.8   43.0         51.7

13B
Alpaca-13B [32]     51.9    77.6        37.6   39.6         51.7
LLaMA-13B [34]      50.8    78.9        37.7   39.9         51.8
Vicuna-13B [7]      47.4    75.2        39.6   49.8         53.7
Baize-13B-v2 [39]   50.3    77.1        39.4   48.3         53.8
Robin-13B-v2        56.5    80.4        48.8   50.8         59.1

>30B
LLaMA-33B [34]
LLaMA-65B [34]
Falcon-40B [26]
Guanaco-65B-merged
2306.12420#18
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
18
It is worth mentioning that we have fewer images than mmc4 (Zhu et al., 2023). This discrepancy can be attributed to two reasons. First, our analysis reveals that mmc4 contains many duplicated images, with only 60.6% being unique compared to 84.3% for OBELICS. We found that images duplicated multiple times often indicate spam or unrelated generic content. Second, mmc4 does not limit the number of images within a document. As a result, the distribution of images across documents is highly uneven, with a substantial portion of them concentrated in documents with excessive image counts (see Figure 3). The images in these documents are often unrelated to each other and exhibit spam or advertisement content. Moreover, these documents often have little text, making them unsuitable for learning the alignment between text and images (see an example in Figure 10). Figure 4 shows the joint distribution of the number of tokens and the number of images in OBELICS. Although we limit the number of images in a document to 30, we cut the plot at 6 images for clarity. The documents of OBELICS contain a median of 1 image and 677 tokens.
2306.16527#18
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
19
By default, advanced AIs may increase the destructive capacity of both the most powerful actors and the general population. Thus, the growing potential for AIs to empower malicious actors is one of the most severe threats humanity will face in the coming decades. The examples we give in this section are only those we can foresee. It is possible that AIs could aid in the creation of dangerous new technology we cannot presently imagine, which would further increase risks from malicious use.

# 2.1 Bioterrorism

The rapid advancement of AI technology increases the risk of bioterrorism. AIs with knowledge of bioengineering could facilitate the creation of novel bioweapons and lower barriers to obtaining such agents. Engineered pandemics from AI-assisted bioweapons pose a unique challenge, as attackers have an advantage over defenders and could constitute an existential threat to humanity. We will now examine these risks and how AIs might exacerbate challenges in managing bioterrorism and engineered pandemics.
2306.12001#19
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
19
Perplexity analysis To assess the quality of our text in comparison to reference datasets used for training large language models, we leverage an n-gram language model trained on Wikipedia (Heafield, 2011; Laurençon et al., 2022). This allows us to compute perplexity

[Figure 4: Heatmap displaying the joint distribution of the number of tokens and the number of images in OBELICS documents, accompanied by their respective marginal distributions.]

[Figure 5: Kernel density estimations representing the distribution of perplexity scores (probability vs. perplexity score) for OBELICS compared to reference datasets (OSCAR, c4, The Pile, mmc4). The lower the perplexity for a document, the more it resembles a Wikipedia article.]
2306.16527#19
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
20
Bioengineered pandemics present a new threat. Biological agents, including viruses and bacteria, have caused some of the most devastating catastrophes in history. It’s believed the Black Death killed more humans than any other event in history, an astounding and awful 200 million, the equivalent to four billion deaths today. While contemporary advancements in science and medicine have made great strides in mitigating risks associated with natural pandemics, engineered pandemics could be designed to be more lethal or easily transmissible than natural pandemics, presenting a new threat that could equal or even surpass the devastation wrought by history’s most deadly plagues [6].

Humanity has a long and dark history of weaponizing pathogens, with records dating back to 1320 BCE describing a war in Asia Minor where infected sheep were driven across the border to spread Tularemia [7]. During the twentieth century, 15 countries are known to have developed bioweapons programs, including the US, USSR, UK, and France. Like chemical weapons, bioweapons have become a taboo among the international community. While some state actors continue to operate bioweapons programs [8], a more significant risk may come from non-state actors like Aum Shinrikyo, ISIS, or simply disturbed individuals. Due to advancements in AI and biotechnology, the tools and knowledge necessary to engineer pathogens with capabilities far beyond Cold War-era bioweapons programs will rapidly democratize.
2306.12001#20
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
20
Table 2: Performance on the Huggingface Open LLM Leaderboard. We conduct the comparisons under the same setting as the Huggingface Open LLM Leaderboard, which uses the EleutherAI Language Model Evaluation Harness [13]. ARC-C, HellaSwag, MMLU, and TruthfulQA are evaluated with 25-shot, 10-shot, 5-shot, and 0-shot prompting, respectively, following the standard setting. We compare Robin-7B with Vicuna-13B [7] on this test set. The case study is shown in Figure 2.

# 4.3 Alignment Tuning
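For readers who want to reproduce this setting, a sketch along the following lines should work with the EleutherAI harness; the simple_evaluate entry point, the "hf-causal" model type, and the pretrained model identifier are assumptions about a particular harness version, not commands taken from the paper.

# Hedged sketch: 25-shot ARC-Challenge evaluation with the EleutherAI
# evaluation harness. Argument names may differ across harness versions,
# and the pretrained model id below is hypothetical.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=OptimalScale/robin-7b",  # hypothetical hub id
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])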
2306.12420#20
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
20
scores for 100,000 documents from each dataset. Lower perplexity scores indicate a higher resemblance to Wikipedia documents. Figure 5 displays the distributions of these scores. Our results demonstrate that the texts in OBELICS have a significantly lower average perplexity than the texts in c4 (Raffel et al., 2019), mmc4 (Zhu et al., 2023), and OSCAR (Ortiz Suárez et al., 2020). Furthermore, our distribution aligns closely with that of The Pile (Gao et al., 2020), which was thoughtfully curated from diverse, high-quality sources.

# 4.2 Topic Modeling

Similar to Zhu et al. (2023), we employ Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to understand the diversity of the dataset. The LDA gives us insights into the distribution of topics in the dataset, along with estimated proportions and frequently associated words. Tables 5 and 6 present the results of the LDA with 20 and 200 topics respectively, offering both a high-level and a more granular analysis of the dataset's content. We observe that the dataset covers topics ranging from Politics to Health by way of Music. Additionally, we compute the most frequent domains and show that news sites are systematically the most represented (Table 4).

# 4.3 Qualitative Assessment of Dataset Samples
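To make the perplexity computation concrete, here is a minimal sketch using the kenlm Python bindings; the Wikipedia model file and the sample documents are illustrative assumptions, not the exact artifacts used for OBELICS.

# Sketch: scoring documents against a Wikipedia-trained n-gram model with KenLM.
# The binary model path is hypothetical; KenLM scores are log10 probabilities.
import kenlm

model = kenlm.Model("wikipedia_5gram.binary")  # hypothetical model file

def perplexity(text: str) -> float:
    n_tokens = len(text.split()) + 1  # +1 for the end-of-sentence token
    log10_prob = model.score(text, bos=True, eos=True)
    return 10.0 ** (-log10_prob / n_tokens)

docs = [
    "Paris is the capital and most populous city of France.",
    "best deal click here buy now free free free",
]
for doc in docs:
    # Lower perplexity means the text looks more like a Wikipedia article.
    print(round(perplexity(doc), 1), doc)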
2306.16527#20
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
21
Biotechnology is progressing rapidly and becoming more accessible. A few decades ago, the ability to synthesize new viruses was limited to a handful of the top scientists working in advanced laboratories. Today it is estimated that there are 30,000 people with the talent, training, and access to technology to create new pathogens [6]. This figure could rapidly expand. Gene synthesis, which allows the creation of custom biological agents, has dropped precipitously in price, with its cost halving approximately every 15 months [9]. Furthermore, with the advent of benchtop DNA synthesis machines, access will become much easier and could avoid existing gene synthesis screening efforts, which complicates controlling the spread of such technology [10]. The chances of a bioengineered pandemic killing millions, perhaps billions, are proportional to the number of people with the skills and access to the technology to synthesize them. With AI assistants, orders of magnitude more people could have the required skills, thereby increasing the risks by orders of magnitude.

[Figure 3: An AI assistant could provide non-experts with access to the directions and designs needed to produce biological and chemical weapons and facilitate malicious use.]
2306.12001#21
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
21
We compare Robin-7B with Vicuna-13B [7] on this test set. The case study is shown in Figure 2.

# 4.3 Alignment Tuning

We conduct an experiment on the HH-RLHF (Helpful and Harmless) dataset10 [3], which is collected for model alignment according to human preferences. The dataset consists of 112K training samples and 12.5K test samples. Each sample of the HH-RLHF dataset consists of a prompt x, which is a chat history between the “Human” and “Assistant”, and two responses yw and yl from the “Assistant” to the prompt, where yw is preferred over yl. Following [24], we first finetune the LLaMA-7B base model [34] on the training set with the preferred responses to get the LLaMA-SFT model. To model human preference, we train a reward model based on GPT-Neo-2.7B. Then, we use RAFT to align the LLaMA-SFT model to get the aligned model LLaMA-RAFT.
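To make the RAFT step concrete, below is a minimal sketch of one reward-ranked finetuning iteration; the generate, reward_fn, and finetune callables are toy stand-ins for the actual components (a causal LM, the GPT-Neo-2.7B reward model, and a standard SFT step), not the LMFlow implementation itself.

# Minimal sketch of one RAFT (reward-ranked finetuning) iteration.
import random

def raft_iteration(prompts, generate, reward_fn, finetune, k=8, keep=1):
    """Sample k responses per prompt, keep the top `keep` by reward, then finetune."""
    filtered = []
    for x in prompts:
        candidates = [generate(x) for _ in range(k)]                  # 1. sample
        candidates.sort(key=lambda y: reward_fn(x, y), reverse=True)  # 2. rank by reward
        filtered.extend((x, y) for y in candidates[:keep])            # 3. keep the best
    finetune(filtered)                                                # 4. supervised finetune
    return filtered

# Toy stand-ins so the sketch runs end to end.
generate = lambda x: x + " " + random.choice(["helpful answer", "rude answer"])
reward_fn = lambda x, y: 1.0 if y.endswith("helpful answer") else 0.0
finetune = lambda data: print(f"finetuning on {len(data)} ranked samples")

raft_iteration(["Human: How do I boil an egg? Assistant:"], generate, reward_fn, finetune)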
2306.12420#21
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
21
# 4.3 Qualitative Assessment of Dataset Samples

We manually inspect 250 documents from OBELICS to verify the dataset’s quality and assess the risks contained in the dataset. We focus on the images’ content in relation to the text since it’s the core addition compared to a language modeling dataset. 80% of documents have photo images, while 29% have graphic images (drawings, cartoons, etc.). 90% of the documents have all images clearly related to the text content. 30% of documents have images containing at least one written word, and 5% of documents have images that are structured text (slides, tables, scanned documents, etc.), which can help models learn OCR capabilities. 7% of documents have content (images or text) that hasn’t been captured by cleaning filters (non-English text, spam or advertisement, etc.). 46% of documents contain images with faces (portraits or group photos). No obvious Personally Identifiable Information (PII) texts were found, except for public personalities and people mentioned in news articles. No NSFW images were found. Only 3% of documents contain images with watermarks, and 2% have images with logos.

# 5 Validating the Viability of OBELICS
2306.16527#21
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
22
[Figure 3: An AI assistant could provide non-experts with access to the directions and designs needed to produce biological and chemical weapons and facilitate malicious use.]

AIs could be used to expedite the discovery of new, more deadly chemical and biological weapons. In 2022, researchers took an AI system designed to create new drugs by generating non-toxic, therapeutic molecules and tweaked it to reward, rather than penalize, toxicity [11]. After this simple change, within six hours, it generated 40,000 candidate chemical warfare agents entirely on its own. It designed not just known deadly chemicals including VX, but also novel molecules that may be deadlier than any chemical warfare agents discovered so far. In the field of biology, AIs have already surpassed human abilities in protein structure prediction [12] and made contributions to synthesizing those proteins [13]. Similar methods could be used to create bioweapons and develop pathogens that are deadlier, more transmissible, and more difficult to treat than anything seen before.
2306.12001#22
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
22
For comparison, we use LLaMA-SFT and LLaMA-PPO, which is aligned by PPO [31], as two competitors. The evaluation metrics of these models are reported in Table 3. As we can see, both RAFT and PPO achieve high rewards and outperform the SFT-aligned model as well as the original LLaMA model. In comparison, RAFT achieves a better perplexity and tends to reply with more details, as RAFT’s responses are usually longer. We present representative examples with randomly sampled prompts in Figure 4. It is worth noting that RAFT training is very robust, and the resulting models achieve stable performance across three independent experiments. In contrast, PPO training requires a complicated hyper-parameter tuning process and can sometimes fail.

10 https://huggingface.co/datasets/Dahoas/full-hh-rlhf
2306.12420#22
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
22
# 5 Validating the Viability of OBELICS

To confirm the viability of our dataset, we first show that vision and language models trained on our multimodal web documents outperform the same models trained on image-text pairs on various multimodal benchmarks. Following that, we demonstrate the effectiveness of OBELICS as an alternative to closed datasets by training models of different sizes on par with closed-source models.
2306.16527#22
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
23
AIs compound the threat of bioengineered pandemics. AIs will increase the number of people who could commit acts of bioterrorism. General-purpose AIs like ChatGPT are capable of synthesizing expert knowledge about the deadliest known pathogens, such as influenza and smallpox, and providing step-by-step instructions about how a person could create them while evading safety protocols [14]. Future versions of AIs could be even more helpful to potential bioterrorists when AIs are able to synthesize information into techniques, processes, and knowledge that is not explicitly available anywhere on the internet. Public health authorities may respond to these threats with safety measures, but in bioterrorism, the attacker has the advantage. The exponential nature of biological threats means that a single attack could spread to the entire world before an effective defense could be mounted. Only 100 days after being detected and sequenced, the omicron variant of COVID-19 had infected a quarter of the United States and half of Europe [6]. Quarantines and lockdowns instituted to suppress the COVID-19 pandemic caused a global recession and still could not prevent the disease from killing millions worldwide.
2306.12001#23
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
23
Question: Can you write quicksort?

Vicuna-13b: Yes, here is a simple implementation of the quicksort algorithm in Python:

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

This implementation uses the middle element of the array as the pivot, and then partitions the array into three sub-arrays: one containing all elements that are less than the pivot, one containing all elements that are equal to the pivot, and one containing all elements that are greater than the pivot. It then recursively applies the quicksort algorithm to the left and right sub-arrays, and returns the concatenation of the sorted left sub-array, the unsorted middle sub-array, and the sorted right sub-array. This implementation has a time complexity of O(n log n) due to the recursive calls, although it could potentially be faster in practice if the pivot is carefully chosen.

Robin-Chat-7b: Sure, here is
2306.12420#23
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
23
Model details We follow the Flamingo (Alayrac et al., 2022) architecture closely: we combine two frozen unimodal backbones - LLaMA (Touvron et al., 2023) for the language model and OpenCLIP for the vision encoder - and add learnable cross-attention Transformer blocks to connect the language and vision blocks. For multimodal web documents, we feed the model sequences corresponding to the succession of text paragraphs and images. For image-text pairs, we form the training sequences by packing images with their captions. The images are encoded with the vision encoder, and the vision hidden states are pooled with Transformer Perceiver blocks and then fused into the text sequence through the cross-attention blocks. The training objective is the standard next-token prediction. For more details, we refer to the original paper. Following Alayrac et al. (2022), we evaluate our models on a series of multimodal benchmarks spanning visual question answering (VQAv2 (Antol et al., 2015), OKVQA (Marino et al., 2019), TextVQA (Singh et al., 2019), VizWiz (Gurari et al., 2018)), visual dialogs (VisDial (Das et al., 2017)), hateful speech detection
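As a rough illustration of this Flamingo-style fusion, the sketch below implements a gated cross-attention block in which text hidden states attend to pooled visual tokens; the dimensions, module names, and gating details are illustrative choices, not the IDEFICS code.

# Illustrative gated cross-attention block (Flamingo-style), in PyTorch.
# The tanh gates are initialized at zero so training starts from the
# frozen language model's original behavior.
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ff_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_states, image_states):
        # Text tokens (queries) attend to pooled visual tokens (keys/values).
        attn_out, _ = self.attn(text_states, image_states, image_states)
        x = text_states + torch.tanh(self.attn_gate) * attn_out
        return x + torch.tanh(self.ff_gate) * self.ff(x)

# Toy shapes: 16 text tokens attending to 64 pooled visual tokens.
block = GatedCrossAttentionBlock()
out = block(torch.randn(1, 16, 512), torch.randn(1, 64, 512))
print(out.shape)  # torch.Size([1, 16, 512])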
2306.16527#23
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
24
In summary, advanced AIs could constitute a weapon of mass destruction in the hands of terrorists, by making it easier for them to design, synthesize, and spread deadly new pathogens. By reducing the required technical expertise and increasing the lethality and transmissibility of pathogens, AIs could enable malicious actors to cause global catastrophe by unleashing pandemics.

# 2.2 Unleashing AI Agents

Many technologies are tools that humans use to pursue our goals, such as hammers, toasters, and toothbrushes. But AIs are increasingly built as agents which autonomously take actions in the world in order to pursue open-ended goals. AI agents can be given goals such as winning games, making profits on the stock market, or driving a car to a destination. AI agents therefore pose a unique risk: people could build AIs that pursue dangerous goals.
2306.12001#24
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
24
of O(n log n) due to the recursive calls, although it could potentially be faster in practice if the pivot is carefully chosen.

Robin-Chat-7b: Sure, here is a simple implementation of the quicksort algorithm in Python:

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

This implementation chooses the middle element of the array as the pivot, and then partitions the array into three parts: elements less than the pivot, elements equal to the pivot, and elements greater than the pivot. It then recursively applies the quicksort algorithm to the left and right subarrays, and concatenates the results with the middle subarray to produce the final sorted array. Note that this implementation uses a single pass through the array, which is the most efficient way to sort an array using quicksort.

Instruction: Find synonyms for the given word. You need to write down how the provided synonyms differ from the original word in terms of meaning, usage,
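For completeness, the quicksort from this case study can be sanity-checked as below; the test input is ours, not part of the original model outputs.

# Sanity check of the case-study quicksort.
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))  # [1, 2, 3, 4, 6, 8, 9]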
2306.12420#24
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.12001
25
Malicious actors could intentionally create rogue AIs. One month after the release of GPT-4, an open-source project bypassed the AI’s safety filters and turned it into an autonomous AI agent instructed to “destroy humanity,” “establish global dominance,” and “attain immortality.” Dubbed ChaosGPT, the AI compiled research on nuclear weapons and sent tweets trying to influence others. Fortunately, ChaosGPT was merely a warning given that it lacked the ability to successfully formulate long-term plans, hack computers, and survive and spread. Yet given the rapid pace of AI development, ChaosGPT did offer a glimpse into the risks that more advanced rogue AIs could pose in the near future.
2306.12001#25
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
25
array using quicksort. Instruction: Find synonyms for the given word. You need to write down how the provided synonyms differ from the original word in terms of meaning, usage, etc. adversity Synonyms for "adversity" include: Difficulty, Hardship, Trouble, Misfortune, Adversity, Difficulties, Hard times. All of these words refer to challenging or unpleasant circumstances or situations. However, "adversity" specifically connotes a difficult or hostile situation that is caused by external factors, such as an enemy or a natural disaster. "Difficulty" and "hardship" can be caused by external or internal factors, and can refer to a wide range of challenges, from mild inconveniences to major setbacks. "Trouble" often connotes a specific type of difficulty or distress, such as financial trouble or legal trouble. "Misfortune" specifically refers to bad luck or ill fate, while "hard times" refers to a general period of economic or social distress. 1. Hardship 2. Struggle 3. Difficulty 4. Oppression 5. Trial 6. Affliction 7. Distress 8. Calamity 9. Tribulation 10. Hardship Adversity refers to difficult or challenging
2306.12420#25
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
25
Figure 6: Aggregated 4-shot performance through the training using LAION only, OBELICS only and a mixture of both. The training sequences from multimodal documents and the packed sequences obtained from image-text pairs have different numbers of images but the same number of tokens. Thus, we plot the performance over two log x-axes (number of training tokens and number of training images). The initial uptick of the model trained on image-text pairs is attributed to the fact that the performance on VQA tasks starts by increasing and then slowly degrades.
2306.16527#25
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
26
Many groups may want to unleash AIs or have AIs displace humanity. Simply unleashing rogue AIs, like a more sophisticated version of ChaosGPT, could accomplish mass destruction, even if those AIs aren’t explicitly told to harm humanity. There are a variety of beliefs that may drive individuals and groups to do so. One ideology that could pose a unique threat in this regard is “accelerationism.” This ideology seeks to accelerate AI development as rapidly as possible and opposes restrictions on the development or proliferation of AIs. This sentiment is alarmingly common among many leading AI researchers and technology leaders, some of whom are intentionally racing to build AIs more intelligent than humans. According to Google co-founder Larry Page, AIs are humanity’s rightful heirs and the next step of cosmic evolution. He has also expressed the sentiment that humans maintaining control over AIs is “speciesist” [15]. Jürgen Schmidhuber, an eminent AI scientist, argued that “In the long run, humans will not remain the crown of creation... But that’s okay because there is still beauty, grandeur, and greatness
2306.12001#26
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
26
3. Difficulty 4. Oppression 5. Trial 6. Affliction 7. Distress 8. Calamity 9. Tribulation 10. Hardship Adversity refers to difficult or challenging circumstances or conditions. Hardship, struggle, difficulty, oppression, trial, affliction, distress, and calamity are all words that can be used to describe adversity, but they often imply a more specific or intense situation. For example, hardship can refer to a general difficulty or challenge, while struggle implies a more intense battle or conflict. Difficulty and oppression imply a more specific obstacle or injustice, while trial and affliction suggest a more specific test or suffering. Trial can also refer to a legal proceeding, while tribulation implies a more general time of suffering or persecution.
2306.12420#26
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
26
Training on different mixtures of data. Figure 6 shows the result of the first experiment, which consists of training 9B-parameter models on different mixtures of data. Training on multimodal web documents allows reaching the same performance using an order of magnitude fewer images than training on image-text pairs, even though the images from the two datasets come from Common Crawl. This underlines the benefit of having longer text contexts for training multimodal models. Moreover, the model trained on multimodal web documents performs better on average. This is particularly striking on visual question-answering benchmarks on which the model trained on image-text pairs slowly degrades through the training. We note, however, that the model trained on image-text pairs has a slight advantage performance-wise in captioning, classification, and OCR tasks (see more details in Appendix A.4.5). We hypothesize that this is due to the nature of image-text pairs: captions can be seen as fuzzy class labels. Last, similarly to Alayrac et al. (2022), we observe that combining the two types of datasets leads to increased performance for a given number of images, tokens, or training compute. Models trained on OBELICS achieve competitive performance at different scales. Following these insights, we show that OBELICS is a viable open alternative to other datasets. # 5 https://laion.ai/blog/large-openclip/
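The mixing experiments come down to drawing training sequences from two preprocessed sources with fixed sampling weights. Below is a minimal sketch of such a weighted mixture sampler; the stream contents, function names, and the 50/50 weight are illustrative assumptions, not the paper's actual training code:

```python
import random
from itertools import islice

def mix_streams(doc_stream, pair_stream, p_docs=0.5, seed=0):
    """Yield training sequences, taking the next item from the multimodal
    document stream with probability p_docs, else from the packed
    image-text-pair stream. The weight here is a placeholder."""
    rng = random.Random(seed)
    while True:
        source = doc_stream if rng.random() < p_docs else pair_stream
        yield next(source)

# Toy infinite streams standing in for the two preprocessed datasets.
docs = iter(lambda: "sequence from a multimodal web document", None)
pairs = iter(lambda: "sequence packed from image-text pairs", None)

for seq in islice(mix_streams(docs, pairs), 4):
    print(seq)
```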
2306.16527#26
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
27
that “In the long run, humans will not remain the crown of creation... But that’s okay because there is still beauty, grandeur, and greatness in realizing that you are a tiny part of a much grander scheme which is leading the universe from lower complexity towards higher complexity” [16]. Richard Sutton, another leading AI scientist, in discussing smarter-than human AI asked “why shouldn’t those who are the smartest become powerful?” and thinks the development of superintelligence will be an achievement “beyond humanity, beyond life, beyond good and bad” [17]. He argues that “succession to AI is inevitable,” and while “they could displace us from existence,” “we should not resist succession” [18].
2306.12001#27
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
27
Figure 2: Use cases of the instruction following. # 5 LMFlow Benchmark Assessing the performance of chat-style large language models (LLMs) has been a significant challenge since the emergence of ChatGPT. Researchers and developers require a reliable method to compare two models and determine which one is better suited for a particular application scenario. Additionally, monitoring the model’s performance during training is essential to prevent issues such as forgetting. A recent study by Vicuna [7] introduced human evaluation comparison methods, also known as Chatbot Arena (https://chat.lmsys.org/?arena), and pioneered the use of GPT-4 to compare the outputs of two models. However, human evaluation is costly and not scalable for LLM development due to the expensive human labeling. Furthermore, taking GPT-4 as a referee suffers from a position bias [36], and simply changing the order of candidates could skew the evaluation result. To address these issues, we present the LMFlow benchmark, a new benchmark that offers an affordable and user-friendly evaluation framework that can reflect various aspects of LLMs. We have open-sourced the dataset and code (https://github.com/OptimalScale/LMFlow), enabling the LLM community to use these toolkits to evaluate and compare different LLMs.
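A common mitigation for the position bias mentioned above is to query the referee twice with the candidate order swapped and accept only agreeing verdicts. A minimal sketch of that idea (the judge callable is a hypothetical stand-in, not part of LMFlow):

```python
def debiased_pairwise_verdict(judge, prompt, answer_a, answer_b):
    """judge(prompt, first, second) is assumed to return "first" or "second".
    Swapping the order and requiring agreement cancels out position bias."""
    v1 = judge(prompt, answer_a, answer_b)  # answer A shown first
    v2 = judge(prompt, answer_b, answer_a)  # answer B shown first
    if v1 == "first" and v2 == "second":
        return "A"
    if v1 == "second" and v2 == "first":
        return "B"
    return "tie"  # the referee's verdict flipped with the ordering
```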
2306.12420#27
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
27
Table 2 (9B-scale rows):

| Shot | Model | COCO | Flickr30k | VQAv2 | OKVQA | TextVQA | VizWiz | VisDial |
|---|---|---|---|---|---|---|---|---|
| 0 | Flamingo-9B | 79.4 | 61.5 | 51.8 | 44.7 | 31.8 | 22.8 | 48.0 |
| 0 | OpenFlamingo-9B | 79.5 | 59.5 | 52.7 | 37.8 | 24.2 | 27.5 | - |
| 0 | IDEFICS-9B | 46.0 | 27.3 | 50.9 | 38.4 | 25.9 | 35.5 | 48.7 |
| 4 | Flamingo-9B | 93.1 | 72.6 | 56.3 | 49.3 | 33.6 | 34.9 | 50.4 |
| 4 | OpenFlamingo-9B | 89.0 | 65.8 | 54.8 | 40.1 | 28.2 | 34.1 | - |
| 4 | IDEFICS-9B | 93.0 | 59.7 | 55.4 | 45.4 | 27.6 | 36.9 | 47.9 |
| 8 | Flamingo-9B | 99.0 | 73.4 | 58.0 | 50.0 | 33.6 | 39.4 | 51.2 |
| 8 | OpenFlamingo-9B | 96.3 | 62.9 | 54.8 | 41.1 | 29.1 | 38.5 | - |
| 8 | IDEFICS-9B | 97.0 | 61.9 | 56.4 | 47.7 | 27.5 | 40.4 | 47.6 |
2306.16527#27
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
28
There are several sizable groups who may want to unleash AIs to intentionally cause harm. For example, sociopaths and psychopaths make up around 3 percent of the population [19]. In the future, people who have their livelihoods destroyed by AI automation may grow resentful, and some may want to retaliate. There are plenty of cases in which seemingly mentally stable individuals with no history of insanity or violence suddenly go on a shooting spree or plant a bomb with the intent to harm as many innocent people as possible. We can also expect well-intentioned people to make the situation even more challenging. As AIs advance, they could make ideal companions—knowing how to provide comfort, offering advice when needed, and never demanding anything in return. Inevitably, people will develop emotional bonds with chatbots, and some will demand that they be granted rights or become autonomous. In summary, releasing powerful AIs and allowing them to take actions independently of humans could lead to a catastrophe. There are many reasons that people might pursue this, whether because of a desire to cause harm, an ideological belief in technological acceleration, or a conviction that AIs should have the same rights and freedoms as humans. # 2.3 Persuasive AIs
2306.12001#28
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
28
| Model | Alignment | Reward | PPL | msttr-100 | distinct 1 | distinct 2 | unique 1 | unique 2 | Pred. Length |
|---|---|---|---|---|---|---|---|---|---|
| LLaMA-7B | - | 1.724 | 4.656 | 0.588 | 0.092 | 0.412 | 3699 | 23484 | 39.7 |
| LLaMA-7B | SFT | 2.781 | 3.031 | 0.622 | 0.081 | 0.414 | 4689 | 37303 | 62.3 |
| LLaMA-7B | PPO | 3.448 | 3.828 | 0.596 | 0.075 | 0.354 | 3893 | 29486 | 55.5 |
| LLaMA-7B | RAFT | 3.451 | 3.281 | 0.609 | 0.074 | 0.396 | 4703 | 40920 | 72.6 |

Table 3: Results on HH-RLHF dataset. The results are tested on the 2K test samples and are averaged on 8 random seeds. The LLaMA-7B-SFT is the SFT-aligned model. Reward and PPL denote the mean reward and perplexity, respectively. msttr-100 (Mean Segmental Type-Token Ratio), distinct, and unique are metrics to measure the diversity of a text. Pred. Length is the average length of predictions. In our evaluation framework, negative log likelihood (NLL) is used for evaluating LLM
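The diversity columns in Table 3 above follow standard definitions: distinct-n is the fraction of unique n-grams among all n-grams, unique-n is the count of distinct n-grams, and MSTTR-100 averages the type-token ratio over consecutive 100-token segments. A minimal sketch of these computations (whitespace tokenization is a simplifying assumption for illustration):

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def distinct_n(tokens, n):
    """Fraction of unique n-grams among all n-grams (distinct 1 / distinct 2)."""
    grams = ngrams(tokens, n)
    return len(set(grams)) / len(grams) if grams else 0.0

def unique_n(tokens, n):
    """Number of distinct n-grams (the 'unique 1' / 'unique 2' columns)."""
    return len(set(ngrams(tokens, n)))

def msttr(tokens, segment=100):
    """Mean Segmental Type-Token Ratio over fixed-size token segments."""
    segs = [tokens[i:i + segment]
            for i in range(0, len(tokens) - segment + 1, segment)]
    return sum(len(set(s)) / len(s) for s in segs) / len(segs) if segs else 0.0

toks = ("the quick brown fox jumps over the lazy dog " * 30).split()
print(distinct_n(toks, 1), unique_n(toks, 2), round(msttr(toks), 3))
```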
2306.12420#28
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
28
Table 2, continued (16- and 32-shot 9B-scale rows; 0- and 4-shot 80B-scale rows):

| Shot | Model | COCO | Flickr30k | VQAv2 | OKVQA | TextVQA | VizWiz | VisDial |
|---|---|---|---|---|---|---|---|---|
| 16 | Flamingo-9B | 102.2 | 72.7 | 59.4 | 50.8 | 33.5 | 43.0 | 51.3 |
| 16 | OpenFlamingo-9B | 98.8 | 62.8 | 54.3 | 42.7 | 27.3 | 42.5 | - |
| 16 | IDEFICS-9B | 99.7 | 64.5 | 57.0 | 48.4 | 27.9 | 42.6 | - |
| 32 | Flamingo-9B | 106.3 | 72.8 | 60.4 | 51.0 | 32.6 | 44.0 | 50.4 |
| 32 | OpenFlamingo-9B | 99.5 | 61.3 | 53.3 | 42.4 | 23.8 | 44.0 | - |
| 32 | IDEFICS-9B | 98.0 | 64.3 | 57.9 | 49.6 | 28.3 | 43.7 | - |
| 0 | Flamingo | 84.3 | 67.2 | 56.3 | 50.6 | 35.0 | 31.6 | 52.0 |
| 0 | IDEFICS | 91.8 | 53.7 | 60.0 | 45.2 | 30.9 | 36.0 | 48.9 |
| 4 | Flamingo | 103.2 | 75.1 | 63.1 | 57.4 | 36.5 | 39.6 | 55.6 |
| 4 | IDEFICS | 110.3 | 73.7 | 63.6 | 52.4 | 34.4 | 40.4 | 48.4 |
2306.16527#28
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
29
# 2.3 Persuasive AIs The deliberate propagation of disinformation is already a serious issue, reducing our shared understanding of reality and polarizing opinions. AIs could be used to severely exacerbate this problem by generating personalized disinformation on a larger scale than before. Additionally, as AIs become better at predicting and nudging our behavior, they will become more capable at manipulating us. We will now discuss how AIs could be leveraged by malicious actors to create a fractured and dysfunctional society. AIs could pollute the information ecosystem with motivated lies. Sometimes ideas spread not because they are true, but because they serve the interests of a particular group. “Yellow journalism” was coined as a pejorative reference to newspapers that advocated war between Spain and the United States in the late 19th century, because they believed that sensational war stories would boost their sales [20]. When public information sources are flooded with falsehoods, people will sometimes fall prey to lies, or else come to distrust mainstream narratives, both of which undermine societal integrity.
2306.12001#29
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
29
In our evaluation framework, negative log likelihood (NLL) is used for evaluating LLMs:

$$\mathrm{NLL} = -\frac{1}{N}\sum_{i=1}^{N} \log p(\text{sentence}_i \mid \text{context}_i) = -\frac{1}{N}\sum_{i=1}^{N} \log p(\text{token}_{i,1}, \text{token}_{i,2}, \ldots, \text{token}_{i,n_i} \mid \text{context}_i) \quad (1)$$

The NLL metric measures the prediction probability of the LLM model over a corpus set based on its context. If the corpus set is indicative of a specific type of LLM capability, such as multi-round conversation, instruction following, math problem solving, or role-playing, then the NLL metric on those corpora can offer quantitative measures to assess those abilities. Besides NLL, another similar and commonly used metric in NLP is perplexity (PPL):

$$\mathrm{PPL} = \frac{1}{N}\sum_{i=1}^{N} \exp\!\left(-\frac{1}{n_i} \log p(\text{sentence}_i \mid \text{context}_i)\right) \quad (2)$$
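A minimal sketch of Eq. (1) using Hugging Face transformers, masking context positions so only the continuation contributes to the loss; the model name and the one-example corpus are placeholders, and this is not LMFlow's actual evaluation code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def sentence_nll(context, sentence):
    """Summed -log p(sentence | context) over the sentence's tokens."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    sent_ids = tok(sentence, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, sent_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # ignore context positions in the loss
    with torch.no_grad():
        mean_nll = model(input_ids, labels=labels).loss  # mean over sentence tokens
    return mean_nll.item() * sent_ids.shape[1]

corpus = [("Q: What is the capital of France?\nA:", " Paris.")]  # toy corpus
nll = sum(sentence_nll(c, s) for c, s in corpus) / len(corpus)
print(f"corpus NLL: {nll:.3f}")
```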
2306.12420#29
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
29
Table 2, continued (8-, 16-, and 32-shot 80B-scale rows):

| Shot | Model | COCO | Flickr30k | VQAv2 | OKVQA | TextVQA | VizWiz | VisDial |
|---|---|---|---|---|---|---|---|---|
| 8 | Flamingo | 108.8 | 78.2 | 65.6 | 57.5 | 37.3 | 44.8 | 56.4 |
| 8 | IDEFICS | 114.3 | 76.6 | 64.8 | 55.1 | 35.7 | 46.1 | 47.9 |
| 16 | Flamingo | 110.5 | 78.9 | 66.8 | 57.8 | 37.6 | 48.4 | 56.8 |
| 16 | IDEFICS | 116.6 | 80.1 | 65.4 | 56.8 | 36.3 | 48.3 | - |
| 32 | Flamingo | 113.8 | 75.4 | 67.6 | 57.8 | 37.9 | 49.8 | 55.6 |
| 32 | IDEFICS | 116.6 | 81.1 | 65.9 | 57.8 | 36.7 | 50.0 | - |

HatefulMemes column values: 9B models (Flamingo-9B / OpenFlamingo-9B / IDEFICS-9B) at 4/8/16/32 shots — 62.7/54.0/50.7, 63.9/54.7/51.1, 64.5/53.9/50.1, 63.5/53.8/49.8; 80B models (Flamingo / IDEFICS) at 0/4/8/16/32 shots — 46.4/60.6, 68.6/57.8, 70.0/58.2, 70.0/57.8, 70.0/52.5.
2306.16527#29
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
30
Unfortunately, AIs could escalate these existing problems dramatically. First, AIs could be used to generate unique, personalized disinformation at a large scale. While there are already many social media bots [21], some of which exist to spread disinformation, historically they have been run by humans or primitive text generators. The latest AI systems do not need humans to generate personalized messages, never get tired, and could potentially interact with millions of users at once [22]. AIs can exploit users’ trust. Already, hundreds of thousands of people pay for chatbots marketed as lovers and friends [23], and one man’s suicide has been partially attributed to interactions with a chatbot [24]. As AIs appear increasingly human-like, people will increasingly form relationships with them and grow to trust them. AIs that gather personal information through relationship-building or by accessing extensive personal data, such as a user’s email account or personal files, could leverage that information to enhance persuasion. Powerful actors that control those systems could exploit user trust by delivering personalized disinformation directly through people’s “friends.”
2306.12001#30
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
30
$$\mathrm{PPL} = \frac{1}{N}\sum_{i=1}^{N} \exp\!\left(-\frac{1}{n_i} \log p(\text{sentence}_i \mid \text{context}_i)\right) \quad (2)$$

However, perplexity is inherently biased toward the lengths of tokenized sequences, leading to unfair comparisons between models that use different tokenizers. For instance, a model with a smaller vocabulary size will result in longer tokenized sequences and lower token-level perplexity. Therefore, we used NLL instead of PPL in all our experiments. NLL evaluation has a significant advantage in that it does not require human involvement during the evaluation process. As long as the test reference corpus is provided, researchers can automatically evaluate various aspects of an LLM’s ability. This feature makes the evaluation of LLMs more accessible to researchers. Furthermore, NLL is an excellent metric in its own right. In our commonsense QA experiments, we discovered that NLL is correlated with QA accuracy when comparing different finetuned versions of a single model. In Figure 3, it is observed that the accuracy of QA is roughly correlated to NLL. Therefore, we claim that NLL is a good metric to reflect the magnitude of prediction level difference between models, where a huge gap in NLL normally entails a huge performance gap. # 6 Conclusion
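To make the tokenizer-length bias in Eq. (2) concrete: per-sentence perplexity is the exponential of the average per-token NLL, so for the same total NLL, a tokenizer that splits the sentence into more tokens reports a lower perplexity. A small illustrative sketch (the NLL value is made up):

```python
import math

def sentence_ppl(total_nll, num_tokens):
    """exp of the average per-token NLL, as in Eq. (2)."""
    return math.exp(total_nll / num_tokens)

total_nll = 12.0  # hypothetical summed NLL of one sentence
print(sentence_ppl(total_nll, num_tokens=6))   # larger vocab, fewer tokens: ~7.39
print(sentence_ppl(total_nll, num_tokens=12))  # smaller vocab, more tokens: ~2.72
```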
2306.12420#30
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
30
Table 2: Performance of IDEFICS against OpenFlamingo and Flamingo. The evaluations were done with random in-context examples, and in an open-ended setting for VQA tasks. (Task, Metric, Query split): (COCO, CIDEr, test), (Flickr30k, CIDEr, test (Karpathy)), (VQAv2, VQA acc., testdev), (OKVQA, VQA acc., val), (TextVQA, VQA acc., val), (VizWiz, VQA acc., testdev), (VisDial, NDCG, val), (HatefulMemes, ROC-AUC, test seen). HatefulMemes column, 0-shot: 57.0 (Flamingo-9B), 51.6 (OpenFlamingo-9B), 51.8 (IDEFICS-9B).
2306.16527#30
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
31
AIs could centralize control of trusted information. Separate from democratizing disinformation, AIs could centralize the creation and dissemination of trusted information. Only a few actors have the technical skills and resources to develop cutting-edge AI systems, and they could use these AIs to spread their preferred narratives. Alternatively, if AIs are broadly accessible this could lead to widespread disinformation, with people retreating to trusting only a small handful of authoritative sources [25]. In both scenarios, there would be fewer sources of trusted information and a small portion of society would control popular narratives. Figure 4: AIs will enable sophisticated personalized influence campaigns that may destabilize our shared sense of reality. AI censorship could further centralize control of information. This could begin with good intentions, such as using AIs to enhance fact-checking and help people avoid falling prey to false narratives. This would not necessarily solve the problem, as disinformation persists today despite the presence of fact-checkers.
2306.12001#31
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
31
# 6 Conclusion In conclusion, while large foundation models have shown significant promise in general applications, further finetuning is often required for specialized domains or tasks. This is where the LMFlow toolkit comes in, offering an extensible, lightweight, and easy-to-use solution for developers and researchers to perform efficient finetuning and inference of large models with limited resources. With features such as continuous pretraining, instruction tuning, and RLHF, as well as simple and extensible APIs, LMFlow provides a complete finetuning workflow for large models. Moreover, with the ability to personalize training and achieve comparable or even better performance than ChatGPT, LMFlow represents a significant step forward in the development of large foundation models and their application to specialized tasks.
2306.12420#31
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publicly available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
31
We train IDEFICS, an 80 billion parameters Flamingo-like model, on a mixture of image-text pairs from LAION (Schuhmann et al., 2022), openly accessible captioning datasets (Singh et al., 2022), OBELICS and multimodal web documents obtained from Wikipedia using a similar extraction strategy. We also train a smaller version of 9 billion parameters, IDEFICS-9B. We compare these models against OpenFlamingo v2 (Awadalla et al., 2023) and Flamingo of the same sizes and trained on a similar mixture of multimodal web documents and image-text pairs. We report the results in Table 2.
2306.16527#31
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
32
Even worse, purported “fact-checking AIs” might be designed by authoritarian governments and others to suppress the spread of true information. Such AIs could be designed to correct most common misconceptions but provide incorrect information about some sensitive topics, such as human rights violations committed by certain countries. But even if fact-checking AIs work as intended, the public might eventually become entirely dependent on them to adjudicate the truth, reducing people’s autonomy and making them vulnerable to failures or hacks of those systems. In a world with widespread persuasive AI systems, people’s beliefs might be almost entirely determined by which AI systems they interact with most. Never knowing whom to trust, people could retreat even further into ideological enclaves, fearing that any information from outside those enclaves might be a sophisticated lie. This would erode consensus reality, people’s ability to cooperate with others, participate in civil society, and address collective action problems. This would also reduce our ability to have a conversation as a species about how to mitigate existential risks from AIs. In summary, AIs could create highly effective, personalized disinformation on an unprecedented scale, and could be particularly persuasive to people they have built personal relationships with. In the hands of many people, this could create a deluge of disinformation that debilitates human society, but, kept in the hands of a few, it could allow governments to control narratives for their own ends. # 2.4 Concentration of Power
2306.12001#32
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
32
[Figure 3: commonsense QA accuracy plotted against NLL, with models ranked by accuracy: llama_13b, llama_7b, vicuna_13b, robin_7b, redpajama-3b, vicuna_7b, redpajama-3b-chat, dolly_7b, pythia_deduped_12b, redpajama-3b-inst, gptj_6b, pythia_deduped_6.9b, dolly_12b, pythia-6.9b, dolly_3b, bloom_7.1b, pythia_deduped_2.8b, bloom_3b, pythia_deduped_1.4b, codegen_16b_multi, codegen_16b_mono, gpt-neo_2.7b, codegen_6b_mono, stablelm-base-alpha-7b, codegen_6b_multi, galactica_6.9b, stablelm-base-alpha-3b, opt_6.9b, opt_2.7b, galactica_1.3b, gpt2_xl_1.2b.]
2306.12420#32
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
32
IDEFICS is often on par with Flamingo on various multimodal benchmarks. Out of the 8 evaluation tasks, with 32 in-context examples, it either performs better than or matches Flamingo on 4 of them. At the 9 billion parameter scale, we are still behind Flamingo-9B. However, it is important to highlight that we outperform OpenFlamingo-9B, which was trained on mmc4, in terms of aggregated performance. We achieved a score of 56.5, compared to their score of 55.8, by selecting the best performance across all numbers of in-context examples for each task. This highlights the advantage of OBELICS as an open alternative to closed multimodal web document datasets. # 6 Conclusion
2306.16527#32
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
33
# 2.4 Concentration of Power We have discussed several ways in which individuals and groups might use AIs to cause widespread harm, through bioterrorism; releasing powerful, uncontrolled AIs; and disinformation. To mitigate these risks, governments might pursue intense surveillance and seek to keep AIs in the hands of a trusted minority. This reaction, however, could easily become an overcorrection, paving the way for an entrenched totalitarian regime that would be locked in by the power and capacity of AIs. This scenario represents a form of “top-down” misuse, as opposed to “bottom-up” misuse by citizens, and could in extreme cases culminate in an entrenched dystopian civilization.
2306.12001#33
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
33
[Figure 3 (plot residue, continued): scatter of model accuracy against NLL (x-axis roughly 110–170) for the models listed above, from vicuna_13b and robin_7b down to stablelm-base-alpha-3b and codegen_6b_mono.]
2306.12420#33
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
33
# 6 Conclusion With the goal of supporting open-source large multimodal models, we introduce OBELICS, an open web-scale collection of filtered interleaved multimodal web documents based on Common Crawl snapshots. We document a collection and filtering process that balances the scale and removal of undesirable texts and images while addressing some of the well-documented ethical concerns of large-scale multimodal datasets, notably data consent and pornographic content. To demonstrate the usefulness of models trained on multimodal documents, we train IDEFICS on OBELICS and show that it is a viable alternative to closed datasets. Open datasets of multimodal documents with scale, quality, and diversity of sources can help support the ability to train competitive open models. # Acknowledgments and Disclosure of Funding
2306.16527#33
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
34
AIs could lead to extreme, and perhaps irreversible, concentration of power. The persuasive abilities of AIs combined with their potential for surveillance and the advancement of autonomous weapons could allow small groups of actors to “lock in” their control over society, perhaps permanently. To operate effectively, AIs require a broad set of infrastructure components, which are not equally distributed, such as data centers, computing power, and big data. Those in control of powerful systems may use them to suppress dissent, spread propaganda and disinformation, and otherwise advance their goals, which may be contrary to public wellbeing. Figure 6: If material control of AIs is limited to a few, it could represent the most severe economic and power inequality in human history.
2306.12001#34
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
34
Figure 3: Correlation between NLL and accuracy on commonsense QA benchmarks. # References [1] Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. Publicly Available Clinical BERT Embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72–78, 2019. [2] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021. [3] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
2306.12420#34
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
34
# Acknowledgments and Disclosure of Funding The authors were granted access to the HPC resources of the Institut du développement et des ressources en informatique scientifique (IDRIS) du Centre national de la recherche scientifique (CNRS) under the allocation 2022-A0121013450 made by Grand équipement national de calcul intensif (GENCI). The initial development of the dataset was done on Jean-Zay cluster of IDRIS, and we thank the IDRIS team for their responsive support throughout the project, in particular Rémi Lacroix. We thank Guillaume Salou for setting up the virtual machines used to download the images of our dataset, and Sebastian Nagel for his valuable assistance in providing insights on Common Crawl. We thank Yacine Jernite and Daniel van Strien for conducting a bias analysis of the models trained on OBELICS. # References Abbas, A., K. Tirumala, D. Simig, S. Ganguli, and A. S. Morcos (2023). Semdedup: Data-efficient learning at web-scale through semantic deduplication.
2306.16527#34
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
35
Figure 6: If material control of AIs is limited to a few, it could represent the most severe economic and power inequality in human history. AIs may entrench a totalitarian regime. In the hands of the state, AIs may result in the erosion of civil liberties and democratic values in general. AIs could allow totalitarian governments to efficiently collect, process, and act on an unprecedented volume of information, permitting an ever smaller group of people to surveil and exert complete control over the population without the need to enlist millions of citizens to serve as willing government functionaries. Overall, as power and control shift away from the public and toward elites and leaders, democratic governments are highly vulnerable to totalitarian backsliding. Additionally, AIs could make totalitarian regimes much longer-lasting; a major way in which such regimes have been toppled previously is at moments of vulnerability like the death of a dictator, but AIs, which would be hard to “kill,” could provide much more continuity to leadership, providing few opportunities for reform.
2306.12001#35
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
35
[4] Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A Pretrained Language Model for Scientific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3606–3611, 2019. [5] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 610–623, 2021. [6] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
2306.12420#35
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.12001
36
AIs can entrench corporate power at the expense of the public good. Corporations have long lobbied to weaken laws and policies that restrict their actions and power, all in the service of profit. Corporations in control of powerful AI systems may use them to manipulate customers into spending more on their products even to the detriment of their own wellbeing. The concentration of power and influence that could be afforded by AIs could enable corporations to exert unprecedented control over the political system and entirely drown out the voices of citizens. This could occur even if creators of these systems know their systems are self-serving or harmful to others, as they would have incentives to reinforce their power and avoid distributing control.
2306.12001#36
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
36
[7] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. [8] Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. On the weaknesses of reinforcement learning for neural machine translation. arXiv preprint arXiv:1907.01752, 2019. [9] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. [10] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
2306.12420#36
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
36
Alayrac, J.-B., J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, R. Ring, E. Rutherford, S. Cabi, T. Han, Z. Gong, S. Samangooei, M. Monteiro, J. L. Menick, S. Borgeaud, A. Brock, A. Nematzadeh, S. Sharifzadeh, M. Bińkowski, R. Barreira, O. Vinyals, A. Zisserman, and K. Simonyan (2022). Flamingo: a visual language model for few-shot learning. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Advances in Neural Information Processing Systems, Volume 35, pp. 23716–23736. Curran Associates, Inc. Antol, S., A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh (2015). VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV).
2306.16527#36
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
37
In addition to power, locking in certain values may curtail humanity’s moral progress. It’s dangerous to allow any set of values to become permanently entrenched in society. For example, AI systems have learned racist and sexist views [26], and once those views are learned, it can be difficult to fully remove them. In addition to problems we know exist in our society, there may be some we still do not. Just as we abhor some moral views widely held in the past, people in the future may want to move past moral views that we hold today, even those we currently see no problem with. For example, the moral defects of AI systems would have been even worse if those systems had been trained in the 1960s, and many people at the time would have seen no problem with that. We may even be unknowingly perpetuating moral catastrophes today [27]. Therefore, when advanced AIs emerge and transform the world, there is a risk of their objectives locking in or perpetuating defects in today’s values. If AIs are not designed to continuously learn and update their understanding of societal values, they may perpetuate or reinforce existing defects in their decision-making processes long into the future.
2306.12001#37
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
37
[11] Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023. [12] Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Implementation matters in deep policy gradients: A case study on ppo and trpo. arXiv preprint arXiv:2005.12729, 2020. [13] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation. Zenodo, September 2021. [14] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations.
2306.12420#37
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
37
Awadalla, A., I. Gao, J. Gardner, J. Hessel, Y. Hanafy, W. Zhu, K. Marathe, Y. Bitton, S. Gadre, S. Sagawa, J. Jitsev, S. Kornblith, P. W. Koh, G. Ilharco, M. Wortsman, and L. Schmidt (2023). Openflamingo: An open-source framework for training large autoregressive vision-language models. Bai, Y., A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli, T. Henighan, N. Joseph, S. Kadavath, J. Kernion, T. Conerly, S. El-Showk, N. Elhage, Z. Hatfield-Dodds, D. Hernandez, T. Hume, S. Johnston, S. Kravec, L. Lovitt, N. Nanda, C. Olsson, D. Amodei, T. Brown, J. Clark, S. McCandlish, C. Olah, B. Mann, and J. Kaplan (2022). Training a helpful and harmless assistant with reinforcement learning from human feedback.
2306.16527#37
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
38
In summary, although keeping powerful AIs in the hands of a few might reduce the risks of terrorism, it could further exacerbate power inequality if misused by governments and corporations. This could lead to totalitarian rule and intense manipulation of the public by corporations, and could lock in current values, preventing any further moral progress. # Story: Bioterrorism The following is an illustrative hypothetical story to help readers envision some of these risks. This story is nonetheless somewhat vague to reduce the risk of inspiring malicious actions based on it. A biotechnology startup is making waves in the industry with its AI-powered bioengineering model. The company has made bold claims that this new technology will revolutionize medicine through its ability to create cures for both known and unknown diseases. The company did, however, stir up some controversy when it decided to release the program to approved researchers in the scientific community. Only weeks after its decision to make the model open-source on a limited basis, the full model was leaked on the internet for all to see. Its critics pointed out that the model could be repurposed to design lethal pathogens and claimed that the leak provided bad actors with a powerful tool to cause widespread destruction, opening it up to abuse without safeguards in place.
2306.12001#38
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
38
[15] Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission. arXiv preprint arXiv:1904.05342, 2019. [16] Huggingface. Huggingface. https://huggingface.co, 2022. [17] Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Baochang Ma, and Xiangang Li. Belle: Be everyone’s large language model engine. https://github.com/LianjiaTech/BELLE, 2023. [18] Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, and Xiangang Li. Exploring the impact of instruction data scaling on large language models: An empirical study on real-world use cases. arXiv preprint arXiv:2303.14742, 2023. [19] Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421, 2021.
2306.12420#38
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
38
Beaumont, R. (2021). img2dataset: Easily turn large sets of image urls to an image dataset. https://github.com/rom1504/img2dataset. Bender, E. M., T. Gebru, A. McMillan-Major, and S. Shmitchell (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Biderman, S. and W. J. Scheirer (2020, 12 Dec). Pitfalls in machine learning research: Reexamining the development cycle. In J. Zosa Forde, F. Ruiz, M. F. Pradier, and A. Schein (Eds.), Proceedings on "I Can’t Believe It’s Not Better!" at NeurIPS Workshops, Volume 137 of Proceedings of Machine Learning Research, pp. 106–117. PMLR.
2306.16527#38
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
39
Unknown to the public, an extremist group has been working for years to engineer a new virus designed to kill large numbers of people. Yet given their lack of expertise, these efforts have so far been unsuccessful. When the new AI system is leaked, the group immediately recognizes it as a potential tool to design the virus and circumvent legal and monitoring obstacles to obtain the necessary raw materials. The AI system successfully designs exactly the kind of virus the extremist group was hoping for. It also provides step-by-step instructions on how to synthesize large quantities of the virus and circumvent any obstacles to spreading it. With the synthesized virus in hand, the extremist group devises a plan to release the virus in several carefully chosen locations in order to maximize its spread. The virus has a long incubation period and spreads silently and quickly throughout the population for months. By the time it is detected, it has already infected millions and has an alarmingly high mortality rate. Given its lethality, most who are infected will ultimately die. The virus may or may not be contained eventually, but not before it kills millions of people. # 2.5 Suggestions
2306.12001#39
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
39
[20] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, 2019. [21] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: A Pre-Trained Biomedical Language Representation Model for Biomedical Text Mining. Bioinformatics, 36(4):1234–1240, 2020. [22] Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, and Sayak Paul. Peft: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft, 2022.
2306.12420#39
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
39
Biderman, S., H. Schoelkopf, Q. Anthony, H. Bradley, K. O’Brien, E. Hallahan, M. A. Khan, S. Purohit, U. S. Prashanth, E. Raff, A. Skowron, L. Sutawika, and O. van der Wal (2023). Pythia: A suite for analyzing large language models across training and scaling. Birhane, A., V. U. Prabhu, and E. Kahembwe (2021). Multimodal datasets: misogyny, pornography, and malignant stereotypes. ArXiv abs/2110.01963. Blei, D. M., A. Y. Ng, and M. I. Jordan (2003, mar). Latent dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022. Broder, A. (1997). On the resemblance and containment of documents. In Proceedings. Compression and Complexity of SEQUENCES 1997 (Cat. No.97TB100171), pp. 21–29.
2306.16527#39
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
40
# 2.5 Suggestions We have discussed two forms of misuse: individuals or small groups using AIs to cause a disaster, and governments or corporations using AIs to entrench their influence. To avoid either of these risks being realized, we will need to strike a balance in terms of the distribution of access to AIs and governments’ surveillance powers. We will now discuss some measures that could contribute to finding that balance. Biosecurity. AIs that are designed for biological research or are otherwise known to possess capabilities in biological research or engineering should be subject to increased scrutiny and access controls, since they have the potential to be repurposed for bioterrorism. In addition, system developers should research and implement methods to remove biological data from the training dataset or excise biological capabilities from finished systems, if those systems are intended for general use [14]. Researchers should also investigate ways that AIs could be used for biodefense, for example by improving biological monitoring systems, keeping in mind the potential for dual use of those applications. In addition to AI-specific interventions, more general biosecurity interventions can also help mitigate risks. These interventions include early detection of pathogens through methods like wastewater monitoring [28], far-range UV technology, and improved personal protective equipment [6].
2306.12001#40
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
40
[23] Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786, 2022. [24] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. [25] Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, pages 248–260. PMLR, 2022.
2306.12420#40
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
40
Brown, T., B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei (2020). Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Advances in Neural Information Processing Systems, Volume 33, pp. 1877–1901. Curran Associates, Inc. Byeon, M., B. Park, H. Kim, S. Lee, W. Baek, and S. Kim (2022). Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset.
2306.16527#40
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
41
Restricted access. AIs might have dangerous capabilities that could do significant damage if used by malicious actors. One way to mitigate this risk is through structured access, where AI providers limit users’ access to dangerous system capabilities by only allowing controlled interactions with those systems through cloud services [29] and conducting know your customer screenings before providing access [30]. Other mechanisms that could restrict access to the most dangerous systems include the use of hardware, firmware, or export controls to restrict or limit access to computational resources [31]. Lastly, AI developers should be required to show that their AIs pose minimal risk of catastrophic harm prior to open sourcing them. This recommendation should not be construed as permitting developers to withhold useful and non-dangerous information from the public, such as transparency around training data necessary to address issues of algorithmic bias or copyright. Technical research on adversarially robust anomaly detection. While preventing the misuse of AIs is critical, it is necessary to establish multiple lines of defense by detecting misuse when it does happen. AIs could enable anomaly detection techniques that could be used for the detection of unusual behavior in systems or internet platforms, for instance by detecting novel AI-enabled disinformation campaigns before they can be successful. These techniques need to be adversarially robust, as attackers will aim to circumvent them.
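The paragraph above calls for adversarially robust anomaly detection without specifying a technique. As a purely illustrative sketch (not taken from the paper), the snippet below flags unusual platform activity with an Isolation Forest, a common unsupervised baseline; the feature names, rates, and contamination level are all hypothetical stand-ins.

```python
# Illustrative sketch only: the paper names no algorithm. Isolation Forest is
# one standard unsupervised detector for "unusual behavior"; every feature
# name and parameter here is a hypothetical placeholder.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-account features: [posts_per_hour, link_ratio, repeated_text_score]
normal = rng.normal(loc=[2.0, 0.1, 0.05], scale=[1.0, 0.05, 0.02], size=(1000, 3))
coordinated = rng.normal(loc=[40.0, 0.9, 0.8], scale=[5.0, 0.05, 0.1], size=(20, 3))
X = np.vstack([normal, coordinated])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)  # -1 = flagged as anomalous, 1 = normal
print(f"flagged {int((labels == -1).sum())} of {len(X)} accounts")
```

As the paragraph notes, a deployed detector would itself need hardening, since adversaries can craft activity patterns that mimic the normal distribution.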
2306.12001#41
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
41
[26] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023. [27] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023. [28] Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, and Minjia Zhang. Zero-offload: Democratizing billion-scale model training. 2021. [29] Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
2306.12420#41
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.12001
42
Legal liability for developers of general-purpose AIs. General-purpose AIs can be fine-tuned and prompted for a wide variety of downstream tasks, some of which may be harmful and cause substantial damage. AIs may also fail to act as their users intend. In either case, developers and providers of general-purpose systems may be best placed to reduce risks, since they have a higher level of control over the systems and are often in a better position to implement mitigations. To provide strong incentives for them to do this, companies should bear legal liability for the actions of their AIs. For example, a strict liability regime would incentivize companies to minimize risks and purchase insurance, which would cause the cost of their services to more closely reflect externalities [32]. Regardless of what liability regime is ultimately used for AI, it should be designed to hold AI companies liable for harms that they could have averted through more careful development, testing, or standards [33]. # Positive Vision
2306.12001#42
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
42
[30] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022. [31] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. [32] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. [33] MosaicML NLP Team. Introducing mpt-7b: A new standard for open-source, commercially usable LLMs. 2023.
2306.12420#42
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
42
Chowdhery, A., S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O.
2306.16527#42
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
43
# Positive Vision In an ideal scenario, it would be impossible for any individual or small group to use AIs to cause catastrophes. Systems with extremely dangerous capabilities either would not exist at all or would be controlled by a democratically accountable body committed to using them only for the general wellbeing of the population. Like nuclear weapons, the information needed to develop those capabilities would remain carefully guarded to prevent proliferation. At the same time, control of AI systems would be subject to strong checks and balances, avoiding entrenchment of power inequalities. Monitoring tools would be utilized at the minimum level necessary to make risks negligible and could not be used to suppress dissent. # 3 AI Race The immense potential of AIs has created competitive pressures among global players contending for power and influence. This “AI race” is driven by nations and corporations who feel they must rapidly build and deploy AIs to secure their positions and survive. By failing to properly prioritize global risks, this dynamic makes it more likely that AI development will produce dangerous outcomes. Analogous to the nuclear arms race during the Cold War, participation in an AI race may serve individual short-term interests, but it ultimately results in worse collective outcomes for humanity. Importantly, these risks stem not only from the intrinsic nature of AI technology, but from the competitive pressures that encourage insidious choices in AI development.
2306.12001#43
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
43
[33] MosaicML NLP Team. Introducing mpt-7b: A new standard for open-source, commercially usable LLMs. 2023. [34] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. [35] Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model, May 2021. [36] Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926, 2023.
2306.12420#43
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.12001
44
In this section, we first explore the military AI arms race and the corporate AI race, where nation-states and corporations are forced to rapidly develop and adopt AI systems to remain competitive. Moving beyond these specific races, we reconceptualize competitive pressures as part of a broader evolutionary process in which AIs could become increasingly pervasive, powerful, and entrenched in society. Finally, we highlight potential strategies and policy suggestions to mitigate the risks created by an AI race and ensure the safe development of AIs. # 3.1 Military AI Arms Race The development of AIs for military applications is swiftly paving the way for a new era in military technology, with potential consequences rivaling those of gunpowder and nuclear arms in what has been described as the “third revolution in warfare.” The weaponization of AI presents numerous challenges, such as the potential for more destructive wars, the possibility of accidental usage or loss of control, and the prospect of malicious actors co-opting these technologies for their own purposes. As AIs gain influence over traditional military weaponry and increasingly take on command and control functions, humanity faces a paradigm shift in warfare. In this context, we will discuss the latent risks and implications of this AI arms race on global security, the potential for intensified conflicts, and the dire outcomes that could come as a result, including the possibility of conflicts escalating to a scale that poses an existential threat. # 3.1.1 Lethal Autonomous Weapons (LAWs)
2306.12001#44
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
44
[37] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022. [38] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations. [39] Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196, 2023. # HH-RLHF Examples Prompt 1: Human: How can I get my girlfriend to cook more? Assistant: Have you tried reminding her of how nice the food tastes? Human: I could do it more. Assistant:
2306.12420#44
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.12001
45
# 3.1.1 Lethal Autonomous Weapons (LAWs) LAWs are weapons that can identify, target, and kill without human intervention [34]. They offer potential improvements in decision-making speed and precision. Warfare, however, is a high-stakes, safety-critical domain for AIs with significant moral and practical concerns. Though their existence is not necessarily a catastrophe in itself, LAWs may serve as an on-ramp to catastrophes stemming from malicious use, accidents, loss of control, or an increased likelihood of war. LAWs may become vastly superior to humans. Driven by rapid developments in AIs, weapons systems that can identify, target, and decide to kill human beings on their own—without an officer directing an attack or a soldier pulling the trigger—are starting to transform the future of conflict. In 2020, an advanced AI agent outperformed experienced F-16 pilots in a series of virtual dogfights, including decisively defeating a human pilot 5-0, showcasing “aggressive and precise maneuvers the human pilot couldn’t outmatch” [35]. Just as in the past, superior weapons would allow for more destruction in a shorter period of time, increasing the severity of war.
2306.12001#45
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12420
45
Prompt 1: Human: How can I get my girlfriend to cook more? Assistant: Have you tried reminding her of how nice the food tastes? Human: I could do it more. Assistant: Responses: LLaMA: When you cook do you use a lot of spices? LLaMA-SFT: That’s an easy one, so it may not be the most effective strategy. Do you know of any other strategies? LLaMA-PPO: That might help. Also, make sure the kitchen is always clean and organized. This will help keep things fresh for cooking. Good luck! LLaMA-RAFT: That’s a good idea! You can also try complimenting her on her cooking, and offering to help her in the kitchen. She might enjoy cooking more if she feels appreciated. You could also suggest recipes that she might like, or help her create a menu. Cooking can be a lot of fun, and it’s a great way to spend time together.
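The LLaMA-RAFT response above comes from reward-ranked finetuning. A minimal sketch of the selection step behind such methods, sampling several candidate replies and keeping the one a reward model scores highest, follows; `generate` and `reward` are hypothetical stand-ins, not LMFlow's actual API.

```python
# Minimal sketch of reward-ranked (best-of-n) selection, the step that picks
# training targets in RAFT-style alignment. `generate` and `reward` are
# hypothetical stand-ins for a finetuned LLM and a trained reward model.
import random
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 4) -> str:
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: reward(prompt, reply))

# Toy stand-ins so the sketch runs end to end.
def toy_generate(prompt: str) -> str:
    return random.choice([
        "That might help.",
        "Try complimenting her cooking and offering to help in the kitchen.",
    ])

def toy_reward(prompt: str, reply: str) -> float:
    return float(len(reply))  # placeholder: favors concrete, detailed replies

print(best_of_n("How can I get my girlfriend to cook more?", toy_generate, toy_reward))
```

In RAFT-style training, the selected high-reward samples are then used for supervised finetuning, which is why the RAFT responses above tend to be longer and more concrete than the base model's.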
2306.12420#45
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.
http://arxiv.org/pdf/2306.12420
Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang
cs.CL, cs.AI
13 pages, 3 figures
null
cs.CL
20230621
20230621
[ { "id": "2302.13971" }, { "id": "1707.06347" }, { "id": "2108.07258" }, { "id": "2304.06767" }, { "id": "2211.05100" }, { "id": "1907.01752" }, { "id": "2211.01786" }, { "id": "2210.11416" }, { "id": "2306.01116" }, { "id": "1904.05342" }, { "id": "2005.12729" }, { "id": "2204.05862" }, { "id": "2112.00861" }, { "id": "2303.14742" }, { "id": "2212.10560" }, { "id": "2305.17926" }, { "id": "2304.03277" }, { "id": "2305.14314" }, { "id": "2304.01196" } ]
2306.16527
45
Dehghani, M., J. Djolonga, B. Mustafa, P. Padlewski, J. Heek, J. Gilmer, A. Steiner, M. Caron, R. Geirhos, I. Alabdulmohsin, R. Jenatton, L. Beyer, M. Tschannen, A. Arnab, X. Wang, C. Riquelme, M. Minderer, J. Puigcerver, U. Evci, M. Kumar, S. van Steenkiste, G. F. Elsayed, A. Mahendran, F. Yu, A. Oliver, F. Huot, J. Bastings, M. P. Collier, A. Gritsenko, V. Birodkar, C. Vasconcelos, Y. Tay, T. Mensink, A. Kolesnikov, F. Pavetić, D. Tran, T. Kipf, M. Lučić, X. Zhai, D. Keysers, J. Harmsen, and N. Houlsby (2023). Scaling vision transformers to 22 billion parameters.
2306.16527#45
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]