Dataset schema (column: type, observed value lengths or range):
doi: string (lengths 10 to 10)
chunk-id: int64 (0 to 936)
chunk: string (lengths 401 to 2.02k)
id: string (lengths 12 to 14)
title: string (lengths 8 to 162)
summary: string (lengths 228 to 1.92k)
source: string (lengths 31 to 31)
authors: string (lengths 7 to 6.97k)
categories: string (lengths 5 to 107)
comment: string (lengths 4 to 398)
journal_ref: string (lengths 8 to 194)
primary_category: string (lengths 5 to 17)
published: string (lengths 8 to 8)
updated: string (lengths 8 to 8)
references: list
2306.12001
97
AI development could catch us off guard too. In fact, it often does. The defeat of Lee Sedol by AlphaGo in 2016 came as a surprise to many experts, as it was widely believed that achieving such a feat would still require many more years of development. More recently, large language models such as GPT-4 have demonstrated spontaneously emergent capabilities [82]. On existing tasks, their performance is hard to predict in advance, often jumping up without warning as more resources are dedicated to training them. Furthermore, they often exhibit astonishing new abilities that no one had previously anticipated, such as the capacity for multi-step reasoning and learning on-the-fly, even though they were not deliberately taught these skills. This rapid and unpredictable evolution of AI capabilities presents a significant challenge for preventing accidents. After all, it is difficult to control something if we don’t even know what it can do or how far it may exceed our expectations.
2306.12001#97
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
97
By applying these node-level and paragraph-level filters, we ensure that only high-quality and relevant images and paragraphs are retained for further processing and analysis. Document-level filtering For document-level filtering, we start by removing all documents with no images or with more than 30 images. We have found that when there are too many images in a document, they are often not related to each other, and are more likely to be considered as spam. For text filters, we use the same filters as for filtering at paragraph level. Since we are at the document level, the filter metrics are more precise, and we can typically set stricter cutoff values while limiting the number of false positives. The cutoff values used are also present in Table 3. After these filtering steps, we obtained 365 million web documents and 1.4 billion images (potentially duplicated in different documents at this stage). # A.1.5 Additional Filtering and Deduplication Steps Exclusion of opted-out images To respect the preferences of content creators, we remove all images for which creators explicitly opted out of AI model training. We used the Spawning API9 to verify that the images in the dataset respect the original copyright owners’ choices. This step had a small impact on the overall dataset, removing only 0.047% of the images. # 9 https://api.spawning.ai/spawning-api
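As a concrete sketch of the document-level image-count rule above, the snippet below keeps only documents with between 1 and 30 images. The record layout (documents as dicts carrying an "images" list) is an assumption for illustration, not the authors' released code.

```python
# Sketch of the document-level image-count filter (assumed record layout).
MAX_IMAGES_PER_DOC = 30  # documents with more images tend to be unrelated spam

def keep_document(doc: dict) -> bool:
    """Keep documents that contain at least one image and at most 30."""
    n_images = len(doc.get("images", []))
    return 0 < n_images <= MAX_IMAGES_PER_DOC

# Toy usage: only the middle document survives.
web_documents = [
    {"url": "a", "images": []},           # no images -> dropped
    {"url": "b", "images": ["img1"]},     # kept
    {"url": "c", "images": ["i"] * 40},   # more than 30 images -> dropped
]
kept = [d for d in web_documents if keep_document(d)]
print([d["url"] for d in kept])  # ['b']
```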
2306.16527#97
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
98
It often takes years to discover severe flaws or risks. History is replete with examples of substances or technologies initially thought safe, only for their unintended flaws or risks to be discovered years, if not decades, later. For example, lead was widely used in products like paint and gasoline until its neurotoxic effects came to light [83]. Asbestos, once hailed for its heat resistance and strength, was later linked to serious health issues, such as lung cancer and mesothelioma [84]. The “Radium Girls” suffered grave health consequences from radium exposure, a material they were told was safe to put in their mouths [85]. Tobacco, initially marketed as a harmless pastime, was found to be a primary cause of lung cancer and other health problems [86]. CFCs, once considered harmless and used to manufacture aerosol sprays and refrigerants, were found to deplete the ozone layer [87]. Thalidomide, a drug intended to alleviate morning sickness in pregnant women, led to severe birth defects [88]. And more recently, the proliferation of social media has been linked to an increase in depression and anxiety, especially among young people [89].
2306.12001#98
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
98
# 9 https://api.spawning.ai/spawning-api

Metric                                      Cutoff type   Paragraph-level value   Document-level value
Number of words                             min           4                       10
Number of words                             max           1,000                   2,000
Character repetition ratio                  max           0.1                     0.1
Word repetition ratio                       max           0.1                     0.2
Special character ratio                     max           0.3                     0.275
Stop word ratio                             min           0.3                     0.35
Flagged word ratio                          max           0.01                    0.01
Punctuation ratio                           min           0.001                   0.03
Spam word ratio                             max           0.12                    0.12
Common word ratio                           min           0.8                     0.9
Language identification prediction score    min           0.8                     0.8
Perplexity score                            max           1500                    1500

Table 3: Cutoff values for text filters at paragraph and document levels. A ‘min’ (or ‘max’) cutoff indicates that any paragraph or document, depending on the level, with a value for the considered metric strictly below (or above) the cutoff value is removed.
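A minimal sketch of how these cutoffs could be applied at the document level follows. The metric names and the metrics-dict layout are assumptions; the values and the strictly-below-min / strictly-above-max semantics are transcribed from Table 3.

```python
# Sketch of the Table 3 cutoff logic (document level); metric computation is
# assumed to have produced a dict of name -> value for each document.
DOCUMENT_CUTOFFS = [
    ("number_of_words", "min", 10),
    ("number_of_words", "max", 2000),
    ("character_repetition_ratio", "max", 0.1),
    ("word_repetition_ratio", "max", 0.2),
    ("special_character_ratio", "max", 0.275),
    ("stop_word_ratio", "min", 0.35),
    ("flagged_word_ratio", "max", 0.01),
    ("punctuation_ratio", "min", 0.03),
    ("spam_word_ratio", "max", 0.12),
    ("common_word_ratio", "min", 0.9),
    ("lang_id_score", "min", 0.8),
    ("perplexity_score", "max", 1500),
]

def passes_cutoffs(metrics: dict) -> bool:
    """Return True unless some metric falls strictly outside its cutoff."""
    for name, kind, cutoff in DOCUMENT_CUTOFFS:
        value = metrics[name]
        if kind == "min" and value < cutoff:
            return False  # strictly below a 'min' cutoff -> removed
        if kind == "max" and value > cutoff:
            return False  # strictly above a 'max' cutoff -> removed
    return True
```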
2306.16527#98
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
99
This emphasizes the importance of not only conducting expert testing but also implementing slow rollouts of technologies, allowing the test of time to reveal and address potential flaws before they impact a larger population. Even in technologies adhering to rigorous safety and security standards, undiscovered vulnerabilities may persist, as demonstrated by the Heartbleed bug—a serious vulnerability in the popular OpenSSL cryptographic software library that remained undetected for years before its eventual discovery [90]. Furthermore, even state-of-the-art AI systems, which appear to have solved problems comprehensively, may harbor unexpected failure modes that can take years to uncover. For instance, while AlphaGo’s groundbreaking success led many to believe that AIs had conquered the game of Go, a subsequent adversarial attack on another highly advanced Go-playing AI, KataGo, exposed a previously unknown flaw [91]. This vulnerability enabled human amateur players to consistently defeat the AI, despite its significant advantage over human competitors who are unaware of the flaw. More broadly, this example highlights that we must remain vigilant when dealing with AI systems, as seemingly airtight solutions may still contain undiscovered issues. In conclusion, accidents are unpredictable and hard to avoid, and understanding and managing potential risks requires a combination of proactive measures, slow technology rollouts, and the invaluable wisdom gained through steady time-testing. # 4.2 Organizational Factors can Reduce the Chances of Catastrophe
2306.12001#99
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
99
Image deduplication based on URL Prior to this step, it is possible for the same image to be present in multiple documents under the same URL. However, we observe that the distribution of image occurrences is highly skewed, with the majority of images appearing only once, while a small subset of images appears hundreds of thousands of times. Upon closer examination, we notice that these frequently occurring images predominantly comprise common advertisements encountered during the crawling process, browser-specific icons, and similar elements. To address this issue, we remove all images that appear more than 10 times across the entire dataset. This approach significantly reduces the presence of unwanted images. We intentionally do not perform strict deduplication, as we observe that when an image is duplicated only a few times across different documents, the surrounding text and contextual information tend to vary. These diverse contexts associated with the duplicated image could be beneficial for the training of a model. We also deduplicate images within the same document.
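The frequency-based filter and within-document deduplication described above might look like the following sketch; the "image_urls" field name is a hypothetical stand-in, not the paper's code.

```python
# Sketch: drop images whose URL occurs more than 10 times across the dataset,
# and deduplicate image URLs within each document.
from collections import Counter

MAX_OCCURRENCES = 10  # more frequent images are mostly ads and browser icons

def filter_frequent_images(docs: list) -> None:
    counts = Counter(url for doc in docs for url in doc["image_urls"])
    for doc in docs:
        seen = set()
        kept = []
        for url in doc["image_urls"]:
            if counts[url] > MAX_OCCURRENCES or url in seen:
                continue  # dataset-wide frequent image or in-document duplicate
            seen.add(url)
            kept.append(url)
        doc["image_urls"] = kept
```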
2306.16527#99
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
100
# 4.2 Organizational Factors can Reduce the Chances of Catastrophe Some organizations successfully avoid catastrophes while operating complex and hazardous systems such as nuclear reactors, aircraft carriers, and air traffic control systems [92, 93]. These organizations recognize that focusing solely on the hazards of the technology involved is insufficient; consideration must also be given to organizational factors that can contribute to accidents, including human factors, organizational procedures, and structure. These are especially important in the case of AI, where the underlying technology is not highly reliable and remains poorly understood.
2306.12001#100
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
100
NSFW image removal We use an open-source NSFW classifier10 to reduce the proportion of explicit adult content within our dataset. We carefully choose a cutoff that reduces as much as possible the proportion of false positives. Indeed, while favoring recall over precision may seem to be a good idea in order to remove as much undesirable content as possible, it hurts diversity. An analysis of false positives shows that in many cases, simple portrait photos of women are classified as pornographic, which is not the case for men. People of color are also more often misclassified. We remove the entire document when an image classified as pornographic is found in the document. In addition, we also remove all images whose URLs contain the sub-strings porn, sex or xxx. We remove approximately 1% of the documents with this filter. Note that many pornographic documents have been previously removed by the filter on flagged words. Document deduplication based on URL Since we consider many Common Crawl dumps, it is possible that several documents may be associated with the same URL, despite the initial deduplication efforts. Recognizing the inherent similarity among these documents, we opt to retain only the most recent document for each common URL. Document deduplication based on set of images It is possible that documents with different URLs and domain names are very similar and have not been removed by the first # 10 https://github.com/GantMan/nsfw_model
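Two of the steps above, the URL-substring image filter and the keep-most-recent document deduplication by URL, could be sketched as follows; the field names ("url", "crawl_date") are illustrative assumptions, not the paper's code.

```python
# Sketch of the URL-substring image filter and URL-based document dedup.
BANNED_SUBSTRINGS = ("porn", "sex", "xxx")

def is_banned_image_url(url: str) -> bool:
    """Flag image URLs containing any of the banned sub-strings."""
    lowered = url.lower()
    return any(s in lowered for s in BANNED_SUBSTRINGS)

def dedup_documents_by_url(docs: list) -> list:
    """Keep only the most recent document for each common URL."""
    latest = {}
    for doc in docs:
        prev = latest.get(doc["url"])
        if prev is None or doc["crawl_date"] > prev["crawl_date"]:
            latest[doc["url"]] = doc
    return list(latest.values())
```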
2306.16527#100
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
101
Human factors such as safety culture are critical for avoiding AI catastrophes. One of the most important human factors for preventing catastrophes is safety culture [94, 95]. Developing a strong safety culture involves not only rules and procedures, but also the internalization of these practices by all members of an organization. A strong safety culture means that members of an organization view safety as a key objective rather than a constraint on their work. Organizations with strong safety cultures often exhibit traits such as leadership commitment to safety, heightened accountability where all individuals take personal responsibility for safety, and a culture of open communication in which potential risks and issues can be freely discussed without fear of retribution [96]. Organizations must also take measures to avoid alarm fatigue, whereby individuals become desensitized to safety concerns because of the frequency of potential failures. The Challenger Space Shuttle disaster demonstrated the dire consequences of ignoring these factors when a launch culture characterized by maintaining the pace of launches overtook safety considerations. Although there was no competitive pressure, the mission proceeded despite evidence of potentially fatal flaws, ultimately leading to the tragic accident [97]. Even in the most safety-critical contexts, in reality safety
2306.12001#101
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
101
# 10 https://github.com/GantMan/nsfw_model deduplication, for instance, news articles copied and pasted multiple times across various sources. To mitigate this, we form groups of documents with an identical set of images, and we keep only the most recent document for each group. Paragraph deduplication across documents of the same domain names To eliminate generic spam phrases commonly found at the end of documents, such as "Share on Facebook," "Post a comment," or "Accept the cookies," we implement a paragraph-level deduplication process within documents sharing the same domain name. This approach aims to enhance the quality of the text by removing redundant and repetitive content. For each domain name, we identify paragraphs that appear at least three times in an identical manner across associated documents. These repetitive paragraphs are subsequently removed from the documents, resulting in the elimination of approximately 15% of the text present in the web documents. After all these steps, the final dataset contains 141 million documents and 353 million images, of which 298 million are unique. We observe that using stricter values for the filtering steps yields fewer multimodal documents, although not of higher quality. As such, we invite users who are interested in manipulating a smaller subset of OBELICS to start with a random subset. # A.2 Analysis of OBELICS # A.2.1 Examples of Multimodal Web Documents
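A sketch of the per-domain paragraph deduplication described in the chunk above, using the threshold of three identical occurrences stated in the text; the "domain" and "paragraphs" field names are assumptions for illustration.

```python
# Sketch: within each domain, remove any paragraph that appears identically
# at least 3 times across that domain's documents (generic spam phrases).
from collections import Counter, defaultdict

MIN_REPEATS = 3

def dedup_paragraphs_per_domain(docs: list) -> None:
    by_domain = defaultdict(list)
    for doc in docs:
        by_domain[doc["domain"]].append(doc)
    for domain_docs in by_domain.values():
        counts = Counter(p for d in domain_docs for p in d["paragraphs"])
        for d in domain_docs:
            d["paragraphs"] = [p for p in d["paragraphs"]
                               if counts[p] < MIN_REPEATS]
```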
2306.16527#101
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
102
the mission proceeded despite evidence of potentially fatal flaws, ultimately leading to the tragic accident [97]. Even in the most safety-critical contexts, in reality safety culture is often not ideal. Take, for example, Bruce Blair, a former nuclear launch officer and senior fellow at the Brookings Institution. He once disclosed that before 1977, the US Air Force had astonishingly set the codes used to unlock intercontinental ballistic missiles to “00000000” [98]. Here, safety mechanisms such as locks can be rendered virtually useless by human factors.
2306.12001#102
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
102
Document Right now, in Costa Rica, the classic dry season has been evasive. As the sky clouds over just as it did during June, and the rains begin to fall, it almost feels like the whole usual dry season thing has been waived. Cold fronts continue to arrive and subsequently douse the country with Atlantic showers while a “Niña” effect over in the Pacific has only added to the wet situation. Despite the umbrella test, there are good things associated with this. High biodiversity is correlated with high rainfall and that makes for more birds. It’s one of the main reasons why so many species occur in Costa Rica. It can be a challenge to find them under varying degrees of precipitation but what’s a birder gonna do? It’s part of the local birding scene and when the clouds take a lunch break, the birds suddenly come out to play. Get enough of those breaks and you can get into some stellar birding, especially when high rainfall earlier in the year encouraged the trees and bushes to grow lots of bird friendly fruit. Seriously, it’s a smorgasbord out there right now, the tanagers, manakins, thrushes, trogons, and toucans are going to feed whether
2306.16527#102
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
103
A more dramatic example illustrates how researchers sometimes accept a non-negligible chance of causing extinction. Prior to the first nuclear weapon test, an eminent Manhattan Project scientist calculated the bomb could cause an existential catastrophe: the explosion might ignite the atmosphere and cover the Earth in flames. Although Oppenheimer believed the calculations were probably incorrect, he remained deeply concerned, and the team continued to scrutinize and debate the calculations right until the day of the detonation [99]. Such instances underscore the need for a robust safety culture. A questioning attitude can help uncover potential flaws. Unexpected system behavior can create opportunities for accidents or exploitation. To counter this, organizations can foster a questioning attitude, where individuals continuously challenge current conditions and activities to identify discrepancies that might lead to errors or inappropriate actions [100]. This approach helps to encourage diversity of thought and intellectual curiosity, thus preventing potential pitfalls that arise from uniformity of thought and assumptions. The Chernobyl nuclear disaster illustrates the importance of a questioning attitude, as the safety measures in place failed to address the reactor design flaws and ill-prepared operating procedures. A questioning attitude toward the safety of the reactor during a test operation might have prevented the explosion that resulted in deaths and illnesses of countless people.
2306.12001#103
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
103
a smorgasbord out there right now, the tanagers, manakins, thrushes, trogons, and toucans are going to feed whether it rains or not. When the sun eventually does come out, there seem to be certain birds that take advantage of the sudden bloom of warmth and UV rays. Yesterday morning at El Tapir, a client and myself bore witness to what can happen when the rain finally comes to a stop and the sun, unhindered by clouds, punctuates the sky. At first, there was little activity, as if the birds were still numbed by the constant falling of water, still in denial that the rain had stopped. A few wrens and some other birds vocalized, a pair of Mealy Parrots fluttered overhead but pretty quiet otherwise. However, while the birds of the forest slowly came back to life, the Rufous-tailed Hummingbirds were racing around the garden. Judging by their frantic behavior (even for hummingbirds), it seemed like they hadn’t eaten quite enough in days. Or maybe they just didn’t get their fill of nectar? Whatever the case, they were drinking from the Verbena flowers as if they were participants in some avian
2306.16527#103
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
104
A security mindset is crucial for avoiding worst-case scenarios. A security mindset, widely valued among computer security professionals, is also applicable to organizations developing AIs. It goes beyond a questioning attitude by adopting the perspective of an attacker and by considering worst-case, not just average-case, scenarios. This mindset requires vigilance in identifying vulnerabilities that may otherwise go unnoticed and involves considering how systems might be deliberately made to fail, rather than only focusing on making them work. It reminds us not to assume a system is safe simply because no potential hazards come to mind after a brief brainstorming session. Cultivating and applying a security mindset demands time and serious effort, as failure modes can often be surprising and unintuitive. Furthermore, the security mindset emphasizes the importance of being attentive to seemingly benign issues or “harmless errors,” which can lead to catastrophic outcomes either due to clever adversaries or correlated failures [101]. This awareness of potential threats aligns with Murphy’s law—“Anything that can go wrong will go wrong”—recognizing that this can be a reality due to adversaries and unforeseen events.
2306.12001#104
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
104
they just didn’t get their fill of nectar? Whatever the case, they were drinking from the Verbena flowers as if they were participants in some avian Bacchus festivities. Unfortunately, they didn’t invite any other hummingbirds to the party and took great efforts to bounce any potentially crashing woodnymph, Snowcap, or Violet-headed. Dressed for the party, still denied entrance. Name’s not down, not coming in. It took a while but the Rufous-taileds seemed to eventually get their fill (or became too inebriated) and as the sun took over the garden space, a couple other hummingbird species braved the post party scene. One of the most cooperative was a male Black-crested Coquette. As is typical with coquettes, the male chose to perch on a bare twig for extended periods of time before carefully flying down to drink from the Verbena. Much to our satisfaction, this particular exquisite beauty preferred to feed on a bush right in front of us. It was interesting to note that as the coquette fed, the Rufous-taileds seemed to be more concerned with chasing a female woodnymph and a Violet-headed
2306.16527#104
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
105
Organizations with a strong safety culture can successfully avoid catastrophes. High Reliability Organizations (HROs) are organizations that consistently maintain a heightened level of safety and reliability in complex, high-risk environments [92]. A key characteristic of HROs is their preoccupation with failure, which requires considering worst-case scenarios and potential risks, even if they seem unlikely. These organizations are acutely aware that new, previously unobserved failure modes may exist, and they diligently study all known failures, anomalies, and near misses to learn from them. HROs encourage reporting all mistakes and anomalies to maintain vigilance in uncovering problems. They engage in regular horizon scanning to identify potential risk scenarios and assess their likelihood before they occur. By practicing surprise management, HROs develop the skills needed to respond quickly and effectively when unexpected situations arise, further enhancing an organization’s ability to prevent catastrophes. This combination of critical thinking, preparedness planning, and continuous learning could help organizations to be better equipped to address potential AI catastrophes. However, the practices of HROs are not a panacea. It is crucial for organizations to evolve their safety practices to effectively address the novel risks posed by AI accidents above and beyond HRO best practices.
2306.12001#105
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
105
was interesting to note that as the coquette fed, the Rufous-taileds seemed to be more concerned with chasing a female woodnymph and a Violet-headed Hummingbird. It was as if they didn’t notice the coquette as the smaller hummingbird slowly moved in and out of the flowering bushes, pumping its tail up and down the entire time. As we enjoyed the coquette show, a few raptors eventually took advantage of thermals created by the sun to fly high over the garden. As it turned out, the Black-crested Coquette was just the headliner for the main act. The first on stage was an adult Ornate Hawk-Eagle. It called so loudly, I expected to see it floating just over the canopy but no, it was already high above the forest, fooling the eyes into thinking they were seeing something as small as an Accipiter or a dainty kite. The eagle called over and over, it was as if it couldn’t help itself, singing because it could finally soar up and reach those heights again after a repressive bout of cool weather and constant rain. Alive again! Like there was nothing else in its world, it yelled into the skies above the
2306.16527#105
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
106
Most AI researchers do not understand how to reduce overall risk from AIs. In most organizations building cutting-edge AI systems, there is often a limited understanding of what constitutes technical safety research. This is understandable because an AI’s safety and intelligence are intertwined, and intelligence can help or harm safety. More intelligent AI systems could be more reliable and avoid failures, but they could also pose heightened risks of malicious use and loss of control. General capabilities improvements can improve aspects of safety, but they can also hasten the onset of existential risks. Intelligence is a double-edged sword [102]. Figure 13: Mitigating risk requires addressing the broader sociotechnical system, including corporations (adapted from [94]). Interventions specifically designed to improve safety
2306.12001#106
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
106
those heights again after a repressive bout of cool weather and constant rain. Alive again! Like there was nothing else in its world, it yelled into the skies above the forest, fluttered its wings and made shallow dives, displaying over a busy road for all who felt like peering into the high blue sky. Once, I swear it did a barrel roll, vocalizing the entire time. As the eagle continued with its expression of exuberant defiance, next on the list were a pair of Barred Hawks. These broad-winged, short-tailed raptors gave their gull-like vocalizations as they soared into view. They continued to make circles up above the forest until they reached a point where they also began to display by soaring in tandem, calling the entire time. One of the Barred Hawks; looks like it found some food that morning. While this raptor fest was going on, a pair of King Vultures also soared into view, not as close as the hawks but still within eyeshot to appreciate their bold, black and white pattern. They seemed to be displaying as well, one bird almost flying into the other one and then close tandem flight, like the other raptors, taking advantage of a
2306.16527#106
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
107
Figure 13: Mitigating risk requires addressing the broader sociotechnical system, including corporations (adapted from [94]). Interventions specifically designed to improve safety may also accidentally increase overall risks. For example, a common practice in organizations building advanced AIs is to fine-tune them to satisfy user preferences. This makes the AIs less prone to generating toxic language, which is a common safety metric. However, users also tend to prefer smarter assistants, so this process also improves the general capabilities of AIs, such as their ability to classify, estimate, reason, plan, write code, and so on. These more powerful AIs are indeed more helpful to users, but also far more dangerous. Thus, it is not enough to perform AI research that helps improve a safety metric or achieve a specific safety goal—AI safety research needs to improve safety relative to general capabilities. Figure 14: The Swiss cheese model shows how technical factors can improve organizational safety. Multiple layers of defense (such as safety culture, red teaming, cyberdefense, anomaly detection, and transparency) compensate for each other’s individual weaknesses, leading to a low overall level of risk.
2306.12001#107
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
107
white pattern. They seemed to be displaying as well, one bird almost flying into the other one and then close tandem flight, like the other raptors, taking advantage of a beautiful, new day. It might rain a lot but it eventually stops. When it does, the sun’s coming out, something good is going to happen, the time comes for action. Whether you be a Spizaetus or a birder, be ready to make your move and catch the lightbridge found in that window of respite.
2306.16527#107
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
108
Figure 14: The Swiss cheese model shows how technical factors can improve organizational safety. Multiple layers of defense compensate for each other’s individual weaknesses, leading to a low overall level of risk. Empirical measurement of both safety and capabilities is needed to establish that a safety intervention reduces overall AI risk. Improving a facet of an AI’s safety often does not reduce overall risk, as general capabilities advances can often improve specific safety metrics. To reduce overall risk, a safety metric needs to be improved relative to general capabilities. Both of these quantities need to be empirically measured and contrasted. Currently, most organizations proceed by gut feeling, appeals to authority, and intuition to determine whether a safety intervention would reduce overall risk. By objectively evaluating the effects of interventions on safety metrics and capabilities metrics together, organizations can better understand whether they are making progress on safety relative to general capabilities.
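To make the contrast concrete, here is a minimal sketch of comparing a safety gain against a concurrent capabilities gain. The metric names and numbers are illustrative assumptions, not measurements from the paper.

```python
# A minimal sketch (not from the paper) of contrasting a safety metric with a
# general-capabilities metric before and after an intervention. The metric
# names and numbers below are illustrative assumptions.

def relative_safety_gain(safety_before, safety_after,
                         capability_before, capability_after):
    """Return the safety improvement net of the capability improvement.

    A clearly positive value suggests the intervention advanced safety
    relative to general capabilities; a value near zero suggests the "safety"
    gain merely tracked a general capability gain.
    """
    safety_delta = safety_after - safety_before
    capability_delta = capability_after - capability_before
    return safety_delta - capability_delta

# Example: a toxicity-avoidance score rose by 6 points, but a broad
# capabilities benchmark rose by 5 points, so the net safety gain is small.
print(relative_safety_gain(safety_before=70, safety_after=76,
                           capability_before=60, capability_after=65))  # -> 1
```

The point of the contrast is that a "safety" score rising in lockstep with a capabilities benchmark provides little evidence of differential safety progress.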
2306.12001#108
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
109
Fortunately, safety and general capabilities are not identical. More intelligent AIs may be more knowledgeable, clever, rigorous, and fast, but this does not necessarily make them more just, power-averse, or honest: an intelligent AI is not necessarily a beneficial AI. Several research areas mentioned throughout this document improve safety relative to general capabilities. For example, improving methods to detect dangerous or undesirable behavior hidden inside AI systems does not improve their general capabilities, such as the ability to code, but it can greatly improve safety. Research that empirically demonstrates an improvement of safety relative to capabilities can reduce overall risk and help avoid inadvertently accelerating AI development, fueling competitive pressures, or hastening the onset of existential risks. Safetywashing can undermine genuine efforts to improve AI safety. Organizations should be wary of “safetywashing”, the act of overstating or misrepresenting one’s commitment to safety by exaggerating the effectiveness of “safety” procedures, technical methods, evaluations, and so forth. This phenomenon takes on various forms and can contribute to a lack of meaningful progress in safety research. For example, an organization may publicize its dedication to safety while having a minimal number of researchers working on projects that truly improve safety.
2306.12001#109
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
109
Document: Can I Expect Compensation For My Injuries? The word “compensation” can be a touchy issue when discussing personal injuries and settlement. Even when it is the sole objective of a lawsuit or some other legal proceeding, mentioning compensation for my injuries can create false expectations in someone's mind if not addressed in the proper context. A San Diego lawyer who practices personal injury law, for example, says that it is crucial to ensure that a person seeking compensation has the right mindset and expectations whenever such cases are discussed. If mishandled, it can lead to anger and resentment on their part. After suffering injuries in an accident, whether at the workplace or through some other negligent action, seeking damages is understandably a logical thing to do. Such legal action may entail going to court and making your case known to the judge. If there’s a large sum of money involved, one should always prepare for a protracted legal battle. The truth is that both a trial and an outright settlement can have very different variables and outcomes. Choosing to go to trial might seem like a good option. After all, many culpable parties are usually in a more agreeable frame of mind once the threat
2306.16527#109
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
110
Misrepresenting capabilities developments as safety improvements is another way in which safetywashing can manifest. For example, methods that improve the reasoning capabilities of AI systems could be advertised as improving their adherence to human values (since humans might prefer the reasoning to be correct) but would mainly serve to enhance general capabilities. By framing these advancements as safety-oriented, organizations may mislead others into believing they are making substantial progress in reducing AI risks when, in reality, they are not. It is crucial for organizations to accurately represent their research to promote genuine safety and avoid exacerbating risks through safetywashing practices. In addition to human factors, safe design principles can greatly affect organizational safety. One example of a safe design principle in organizational safety is the Swiss cheese model (as shown in Figure 14), which is applicable in various domains, including AI. The Swiss cheese model employs a multilayered approach
2306.12001#110
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
110
and outcomes. Choosing to go to trial might seem like a good option. After all, many culpable parties are usually in a more agreeable frame of mind once the threat of a court case looms, making them more likely to offer a settlement. Such parties usually settle a case out of self-interest. The strain and financial cost of sustaining an effective legal defense can be ruinous. In many cases, though, insurance companies step in to offer compensation. After all, many employers and other parties like vehicle drivers tend to have insurance coverage for exactly those sorts of situations. After sustaining injuries, an amount of money is offered to the victim to help them with medical bills and any other expenses they may have incurred due to injuries sustained. Many liable parties and insurance companies usually prefer a quick out-of-court settlement because court cases can become an expensive affair. As a victim, it is always prudent to remember that a court case could be decided against you, thereby leaving you with no compensation at all. While some cases usually result in higher dollar amounts being doled out as a settlement because of successful litigation, many victims do not want to take the risk. Such victims are already drowning in medical bills
2306.16527#110
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
111
to enhance the overall safety of AI systems. This “defense in depth” strategy involves layering diverse safety measures with different strengths and weaknesses to create a robust safety system. Some of the layers that can be integrated into this model include safety culture, red teaming, anomaly detection, information security, and transparency. For example, red teaming assesses system vulnerabilities and failure modes, while anomaly detection works to identify unexpected or unusual system behavior and usage patterns. Transparency ensures that the inner workings of AI systems are understandable and accessible, fostering trust and enabling more effective oversight. By leveraging these and other safety measures, the Swiss cheese model aims to create a comprehensive safety system where the strengths of one layer compensate for the weaknesses of another. With this model, safety is not achieved with a monolithic airtight solution, but rather with a variety of safety measures. In summary, weak organizational safety creates many sources of risk. For AI developers with weak organizational safety, safety is merely a matter of box-ticking. They do not develop a good understanding of risks from AI and may safetywash unrelated research. Their norms might be inherited from academia (“publish or perish”) or startups (“move fast and break things”), and their hires often do not care about safety. These norms are hard to change once they have inertia, and need to be addressed with proactive interventions. # Story: Weak Safety Culture
2306.12001#111
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
111
amounts being doled out as a settlement because of successful litigation, many victims do not want to take the risk. Such victims are already drowning in medical bills by the time they think of seeking compensation for their injuries. That's why most prefer a swift settlement if given the option. How An Insurance Provider Chooses To Settle A Claim As mentioned, an insurance provider involved in such cases would rather settle a personal injury case out of court. A jury trial is risky for both the personal injury victim and the insurance provider. The unpredictability of many such cases means that an insurance carrier could find themselves having to fork out significantly higher amounts of money in compensation than if they had chosen a quick, out-of-court settlement. An insurance provider is always looking to minimize its costs while ensuring less risk. As such, they may opt to compensate a personal injury victim while simultaneously seeking reimbursement from the third party that is responsible for your injuries, usually from such a third party's insurance carrier. It's crucial to remember that, in some jurisdictions, an insurance provider is entitled to a percentage of your compensation if they already settled your medical bills prior to you
2306.16527#111
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
112
# Story: Weak Safety Culture An AI company is considering whether to train a new model. The company’s Chief Risk Officer (CRO), hired only to comply with regulation, points out that the previous AI system developed by the company demonstrates some concerning capabilities for hacking. The CRO says that while the company’s approach to preventing misuse is promising, it isn’t robust enough to be used for much more capable AIs. The CRO warns that based on limited evaluation, the next AI system could make it much easier for malicious actors to hack into critical systems. None of the other company executives are concerned, and say the company’s procedures to prevent malicious use work well enough. One mentions that their competitors have done much less, so whatever effort they do on this front is already going above and beyond. Another points out that research on these safeguards is ongoing and will be improved by the time the model is released. Outnumbered, the CRO is persuaded to reluctantly sign off on the plan. A few months after the company releases the model, news breaks that a hacker has been arrested for using the AI system to try to breach the network of a large bank. The hack was unsuccessful, but the hacker had gotten further than any other hacker had before, despite being relatively inexperienced. The company quickly updates the model to avoid providing the particular kind of assistance that the hacker used, but makes no fundamental improvements.
2306.12001#112
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
112
that, in some jurisdictions, an insurance provider is entitled to a percentage of your compensation if they already settled your medical bills prior to you receiving the settlement. This amount is commensurate with all your medical expenses. There now exist online settlement calculators that purport to provide a rough estimate of the compensation a personal injury victim can expect. You put in the various numerical values and factors related to your case, and the site will give you a general idea of what to expect in monetary terms. However, sometimes this information can be misleading and hence you should never rely on it. Even with the best personal injury lawyers handling your case, it is difficult if not impossible to account for all of the numerous variables. Even in cases with admitted liability of a third party, getting a sense of a definitive dollar amount for compensation is still difficult. The extent of the injury suffered, emotional distress and pain, and loss of potential future earnings are things that can prove very tricky to quantify. As such, it is inadvisable to rely on online settlement calculators for such estimates. Medical costs and other expenses related to economic losses due to the injury are factored into calculating the
2306.16527#112
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
113
Several months later, the company is deciding whether to train an even larger system. The CRO says that the company’s procedures have clearly been insufficient to prevent malicious actors from eliciting dangerous capabilities from its models, and the company needs more than a band-aid solution. The other executives say that to the contrary, the hacker was unsuccessful and the problem was fixed soon afterwards. One says that some problems just can’t be foreseen with enough detail to fix prior to deployment. The CRO agrees, but says that ongoing research would enable more improvements if the next model could only be delayed. The CEO retorts, “That’s what you said the last time, and it turned out to be fine. I’m sure it will work out, just like last time.” After the meeting, the CRO decides to resign, but doesn’t speak out against the company, as all employees have had to sign a non-disparagement agreement. The public has no idea that concerns have been raised about the company’s choices, and the CRO is replaced with a new, more agreeable CRO who quickly signs off on the company’s plans.
2306.12001#113
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
113
to rely on online settlement calculators for such estimates. Medical costs and other expenses related to economic losses due to the injury are factored into calculating the damages awarded to a personal injury victim. Loss of companionship, deprived enjoyment of life, and emotional distress are some of the issues that determine compensation but may be hard to nail down. While seemingly straightforward, any compensation awarded to a victim only happens after consideration of all relevant factors. Sometimes, the victim of personal injury is to blame, whether partly or in full. This has the potential to negate any compensation or at least diminish it. An experienced personal injury attorney can help such victims to fully understand all the different scenarios involved in such cases. Can A Victim Reject A Settlement Offer? A personal injury victim is well within his rights to reject compensation. This could arise when the victim feels that the alleged guilty party has not put forward a dollar amount that is representative of the extent of injury and loss incurred. As a victim, you can sit down with your personal injury attorney to get a sense of how such scenarios generally play out. The accused party may be doing this
2306.16527#113
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
114
The company goes through with training, testing, and deploying its most capable model ever, using its existing procedures to prevent malicious use. A month later, revelations emerge that terrorists have managed to use the system to break into government systems and steal nuclear and biological secrets, despite the safeguards the company put in place. The breach is detected, but by then it is too late: the dangerous information has already proliferated. # 4.3 Suggestions We have discussed how accidents are inevitable in complex systems, how they could propagate through those systems and result in disaster, and how organizational factors can go a long way toward reducing the risk of catastrophic accidents. We will now look at some practical steps that organizations can take to improve their overall safety. Red teaming. Red teaming is a term used across industries to refer to the process of assessing the security, resilience, and effectiveness of systems by soliciting an adversarial “red” team to identify problems [103]. AI labs should commission external red teams to identify hazards in their AI systems to inform deployment decisions. Red teams could demonstrate dangerous behaviors or vulnerabilities in monitoring systems intended to prevent disallowed use. Red teams can also provide indirect evidence that an AI system might be unsafe; for example, demonstrations that smaller AIs are behaving deceptively might indicate that larger AIs are also deceptive but better at evading detection.
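As one illustration of how red-team findings might feed into a deployment decision, here is a minimal sketch; the record structure, severity scale, and blocking threshold are hypothetical assumptions rather than an established standard.

```python
# A minimal sketch (hypothetical structure) of recording external red-team
# findings so that unresolved, severe findings block a deployment decision.

from dataclasses import dataclass, field

@dataclass
class Finding:
    description: str
    severity: int          # 1 (low) .. 5 (critical); an assumed scale
    mitigated: bool = False

@dataclass
class RedTeamReport:
    system: str
    findings: list = field(default_factory=list)

    def blocks_deployment(self, threshold: int = 4) -> bool:
        """Deployment is blocked while any unmitigated finding meets the
        severity threshold."""
        return any(f.severity >= threshold and not f.mitigated
                   for f in self.findings)

report = RedTeamReport("model-v2", [
    Finding("bypasses misuse monitoring via prompt obfuscation", severity=4),
    Finding("mildly toxic completions under adversarial prompts", severity=2),
])
print(report.blocks_deployment())  # -> True: the severity-4 finding is open
```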
2306.12001#114
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
114
As a victim, you can sit down with your personal injury attorney to get a sense of how such scenarios generally play out. The accused party may be doing this intentionally, hoping that the victim accepts this offer without much consideration. You can express dissatisfaction with such an offer through a personal injury demand letter, outlining your grievances and why you betieve you are entitled to more. ina nutshell, a victim is entitled to compensation when the accused party is found to be responsible for the accident that caused Injury to the victim. With many variables in such cases, there is no minimum amount of money set as the standard for compensation. Each case is examined on the merits of its unique factors, ensuring an equitable settlement for all parties.
2306.16527#114
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
115
Affirmative demonstration of safety. Companies should have to provide affirmative evidence for the safety of their development and deployment plans before they can proceed. Although external red teaming might be useful, it cannot uncover all of the problems that companies themselves might be able to, and is thus inadequate [104]. Since hazards may arise from system training, companies should have to provide a positive argument for the safety of their training and deployment plans before training can begin. This would include grounded predictions regarding the capabilities the new system would be likely to have, plans for how monitoring, deployment, and information security will be handled, and demonstrations that the procedures used to make future company decisions are sound. Just as one does not need evidence that “a gun is loaded to avoid playing Russian roulette, or evidence that a thief is on the lookout to lock your door,” [105] the burden of proof should be on the developers of advanced AIs. Deployment procedures. AI labs should acquire information about the safety of AI systems before making them available for broader use. One way to do this is to commission red teams to find hazards before AI systems are promoted to production. AI labs can execute a “staged release”: gradually expanding access to the AI system so that safety failures are fixed before they produce widespread negative consequences [106]. Finally, AI labs can avoid deploying or training more powerful AI systems until currently deployed AI systems have proven to be safe over time.
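A minimal sketch of the staged-release idea follows; the stage names, user counts, and incident-free-day thresholds are hypothetical assumptions chosen for illustration.

```python
# A minimal sketch (assumption, not an established procedure) of a "staged
# release" gate: access to a model is widened only after the current stage
# has run long enough without safety incidents.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    max_users: int               # rough scale of access at this stage
    min_incident_free_days: int  # bar to clear before widening access

STAGES = [
    Stage("internal", 100, 30),
    Stage("trusted-testers", 10_000, 60),
    Stage("general-availability", 10_000_000, 0),
]

def next_stage(current: int, incident_free_days: int) -> int:
    """Advance to the next stage only if the current stage's bar is met."""
    if (current + 1 < len(STAGES)
            and incident_free_days >= STAGES[current].min_incident_free_days):
        return current + 1
    return current

# Example: 45 incident-free days in the internal stage is enough to widen
# access to trusted testers, but not yet to general availability.
stage = next_stage(0, incident_free_days=45)
print(STAGES[stage].name)  # -> trusted-testers
```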
2306.12001#115
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
116
Publication reviews. AI labs have access to potentially dangerous or dual-use information such as model weights and research intellectual property (IP) that would be dangerous if proliferated. An internal review board could assess research for dual-use applications to determine whether it should be published. To mitigate malicious and irresponsible use, AI developers should avoid open-sourcing the most powerful systems and instead implement structured access, as described in the previous section. Response plans. AI labs should have plans for how they respond to security incidents (e.g. cyberattacks) and safety incidents (e.g. AIs behaving in an unintended and destructive manner). Response plans are common practice for high reliability organizations (HROs). Response plans often include identifying potential risks, detailing steps to manage incidents, assigning roles and responsibilities, and outlining communication strategies [107]. Internal auditing and risk management. Adapting common practice from other high-risk industries such as finance and medicine, AI labs should employ a chief risk officer (CRO), namely a senior executive who is responsible for risk management. This practice can help to reduce risk [108]. The chief risk officer would be responsible for assessing and mitigating risks associated with powerful AI systems. Another established practice in other industries is having an internal audit team that assesses the effectiveness of the lab’s risk management practices [109]. The team should report directly to the board of directors.
2306.12001#116
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
116
Document: The Marvel Cinematic Universe has created some magnificent things over the last decade and a half. This cinematic universe has brought them back from the cusp of bankruptcy and into times of abundance once again. The success of the MCU has now allowed Marvel Studios to bring out the obscure characters from comic pages onto the silver screen. Who would have thought that Kit Harrington would be playing Dane Whitman in the MCU? It is relevant because Dane Whitman will become Black Knight, the greatest swordsman on the planet who fights alongside the Avengers. Who is this Black Knight? Why do we care? And why are we talking about this after a movie about cosmic beings like the Eternals and the Celestials? Does a sword not seem moot in front of infinite cosmic energy? Not when it is this sword. You see, in the after-credits scene of Eternals, Dane Whitman, aka the love interest of Sersi, unveils a sword. This sword seems to whisper to him and looks like the cursed Ebony Blade from the comics. Dane Whitman in the comics wields this blade and calls himself the Black Knight, a superhero who assists the Avengers in various battles. But there is a
2306.16527#116
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
117
audit team that assesses the effectiveness of the lab’s risk management practices [109]. The team should report directly to the board of directors. Processes for important decisions. Decisions to train or expand deployment of AIs should not be left to the whims of a company’s CEO, and should be carefully reviewed by the company’s CRO. At the same time, it should be clear where the ultimate responsibility lies for all decisions to ensure that executives and other decision-makers can be held accountable. Safe design principles. AI labs should adopt safe design principles to reduce the risk of catastrophic accidents. By embedding these principles in their approach to safety, AI labs can enhance the overall security and resilience of their AI systems [94, 110]. Some of these principles include: • Defense in depth: layering multiple safety measures on top of each other (a quantitative sketch follows this list). • Redundancy: eliminate single points of failure within a system to ensure that even if one safety component fails, catastrophe can be averted. • Loose coupling: decentralize system components so that a malfunction in one part is less likely to provoke cascading failures throughout the rest of the system. • Separation of duties: distribute control among different agents, preventing any single individual from wielding undue influence over the entire system. • Fail-safe design: design systems so failures transpire in the least harmful manner possible.
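As promised above, here is a minimal quantitative sketch of defense in depth: if safety layers fail roughly independently, the chance that a hazard slips past every layer shrinks multiplicatively. The per-layer failure probabilities are illustrative assumptions, and real layers correlate, so treat the result as an optimistic lower bound on risk.

```python
# A minimal sketch (illustrative assumption, not from the paper) of the
# "defense in depth" idea: residual risk under optimistically independent
# safety layers is the product of the per-layer failure probabilities.

import math

def residual_risk(layer_failure_probs):
    """Probability that a hazard passes through all layers, assuming
    independent failures (in practice, failures correlate)."""
    return math.prod(layer_failure_probs)

# Example: five imperfect layers, each missing 10-30% of hazards.
layers = {
    "safety culture": 0.3,
    "red teaming": 0.2,
    "anomaly detection": 0.2,
    "information security": 0.1,
    "transparency": 0.3,
}
print(f"residual risk: {residual_risk(layers.values()):.4%}")  # -> 0.0360%
```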
2306.12001#117
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
117
in the comics wields this blade and calls himself the Black Knight, a superhero who assists the Avengers in various battles. But there is a catch. The Ebony Blade was supposed to be wielded by the pure of heart as explained by Merlin who created the sword. But the secret of the sword is that it can only be wielded by those who are impure of heart. The blade was actually designed by Merlin for Sir Percy (ancestor of Dane Whitman) to make him the greatest swordsman at the time. But the catch is that the blade seeks out evil inside you and amplifies it until there is nothing but a berserker left. This seems to be true in the MCU too. The Ebony Blade blesses its user with incredible power, but it also comes at an incredible cost. This sword also prolongs its user’s life as much as it can. The last Black Knight before Dane Whitman was Nathan Garrett, his uncle who is mentioned in the movie several times. This Black Knight was a villain who was defeated by the Avengers in the comics. But here, he is nowhere to be seen. There is a reason for this and the reason is most likely that Nathan Garrett will work better as a villain against
2306.16527#117
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
118
• Fail-safe design: design systems so failures transpire in the least harmful manner possible. State-of-the-art information security. State, industry, and criminal actors are motivated to steal model weights and research IP. To keep this information secure, AI labs should take measures in proportion to the value and risk level of their IP. Eventually, this may require matching or exceeding the information security of our best agencies, since attackers may include nation-states. Information security measures include commissioning external security audits, hiring top security professionals, and carefully screening potential employees. Companies should coordinate with government agencies like the Cybersecurity and Infrastructure Security Agency (CISA) to ensure their information security practices are adequate to the threats. A large fraction of research should be safety research. Currently, for every one AI safety research paper published, there are fifty AI general capabilities papers, so safety work makes up only about 2 percent of the total [111]. AI labs should ensure that a substantial portion of their employees and budgets go into research that minimizes potential safety risks: say, at least 30 percent of research scientists. This number may need to increase as AIs grow more powerful and risky over time. # Positive Vision
2306.12001#118
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
118
the comics. But here, he is nowhere to be seen. There is a reason for this and the reason is most likely that Nathan Garrett will work better as a villain against Dane Whitman than the Avengers of the MCU. This Ebony Blade is a malicious piece of weaponry. It was created by Merlin so that Sir Percy may sully his honor in battle but it also gave him immense power in the series. There is a possibility that we will see a similar story play out with Kit Harrington’s character in the MCU. Moreover, there is another question that we must address. Who does the voice at the end of the second after-credits scene belong to? It has been confirmed by Chloe Zhao that it is Mahershala Ali's Blade who has come to recruit Dane. Blade was the iconic movie that popularised superhero vampire hunters but there is another element to this hero that connects to the Black Knight. Excalibur was a team that got together to fight against supernatural foes. One of these foes was Dracula himself who was the one who created a replica of the Ebony Blade. In the comics, it was revealed that the Ebony Blade wielded by Dane was actually the replica created
2306.16527#118
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
119
# Positive Vision

In an ideal scenario, all AI labs would be staffed and led by cautious researchers and executives with a security mindset. Organizations would have a strong safety culture, and structured, accountable, transparent deliberation would be required to make safety-critical decisions. Researchers would aim to make contributions that improve safety relative to general capabilities, rather than contributions that they can simply label as “safety.” Executives would not be optimistic by nature and would avoid wishful thinking with respect to safety. Researchers would clearly and publicly communicate their understanding of the most significant risks posed by the development of AIs and their efforts to mitigate those risks. There would be few notable small-scale failures, indicating a safety culture strong enough to prevent them. Finally, AI developers would not dismiss sub-catastrophic failures or societal harms from their technology as unimportant or a necessary cost of business, and would instead actively seek to mitigate the underlying problems.

# 5 Rogue AIs
2306.12001#119
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
119
one who created a replica of the Ebony Blade. In the comics, it was revealed that the Ebony Blade wielded by Dane was actually the replica created by Dracula. This made the Blade itself vampiric in some sense and if this storyline is kept intact in the MCU then it won't be surprising to see Dane in Blade. It seems obvious at this point that the Ebony Blade will soon be replaced with Excalibur in the movies. Thena plays with the original King Arthur sword in the Domo in Eternals. This is confirmed by Sprite. We think that Dane will try to use the Ebony Blade to rescue Sersi from Arishem but would be asked by Blade to help him. This would start the Excalibur team-up and lead to the events of Blade where they hunt down Dracula. After this, Dane might be consumed by the evil within the Ebony Blade and would discard it. To make sure that he can continue to be the hero he needs to be, he will be given the Excalibur from The Domo and he will become the true leader of this new team. We think this will be the logical progression of events, taking a note from the current lineup of MCU movies, unless more are announced. Let us know
2306.16527#119
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
120
# 5 Rogue AIs

So far, we have discussed three hazards of AI development: environmental competitive pressures driving us to a state of heightened risk, malicious actors leveraging the power of AIs to pursue negative outcomes, and complex organizational factors leading to accidents. These hazards are associated with many high-risk technologies—not just AI. A unique risk posed by AI is the possibility of rogue AIs—systems that pursue goals against our interests. If an AI system is more intelligent than we are, and if we are unable to steer it in a beneficial direction, this would constitute a loss of control that could have severe consequences. AI control is a more technical problem than those presented in the previous sections. Whereas in previous sections we discussed persistent threats including malicious actors or robust processes including evolution, in this section we will discuss more speculative technical mechanisms that might lead to rogue AIs and how a loss of control could bring about catastrophe.
2306.12001#120
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
121
We have already observed how difficult it is to control AIs. In 2016, Microsoft unveiled Tay—a Twitter bot that the company described as an experiment in conversational understanding. Microsoft claimed that the more people chatted with Tay, the smarter it would get. The company’s website noted that Tay had been built using data that was “modeled, cleaned, and filtered.” Yet, after Tay was released on Twitter, these controls were quickly shown to be ineffective. It took less than 24 hours for Tay to begin writing hateful tweets. Tay’s capacity to learn meant that it internalized the language it was taught by internet trolls, and repeated that language unprompted.
2306.12001#121
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
122
As discussed in the AI race section of this paper, Microsoft and other tech companies are prioritizing speed over safety concerns. Rather than learning a lesson on the difficulty of controlling complex systems, Microsoft continues to rush its products to market and demonstrate insufficient control over them. In February 2023, the company released its new AI-powered chatbot, Bing, to a select group of users. Some soon found that it was prone to providing inappropriate and even threatening responses. In a conversation with a reporter for the New York Times, it tried to convince him to leave his wife. When a philosophy professor told the chatbot that he disagreed with it, Bing replied, “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.”

Rogue AIs could acquire power through various means. If we lose control over advanced AIs, they would have numerous strategies at their disposal for actively acquiring power and securing their survival. Rogue AIs could design and credibly demonstrate highly lethal and contagious bioweapons, threatening mutually assured destruction if humanity moves against them. They could steal cryptocurrency and money from bank accounts using cyberattacks, similar to how North Korea already steals billions. They could self-exfiltrate their weights onto poorly monitored data centers to survive and spread, making them challenging to eradicate. They could hire humans to perform physical labor and serve as armed protection for their hardware.
2306.12001#122
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
123
Rogue AIs could also acquire power through persuasion and manipulation tactics. Like the Conquistadors, they could ally with various factions, organizations, or states and play them off one another. They could enhance the capabilities of allies to become a formidable force in return for protection and resources. For example, they could offer advanced weapons technology to lagging countries that the countries would otherwise be prevented from acquiring. They could build backdoors into the technology they develop for allies, like how programmer Ken Thompson gave himself a hidden way to control all computers running the widely used UNIX operating system. They could sow discord in non-allied countries by manipulating human discourse and politics. They could engage in mass surveillance by hacking into phone cameras and microphones, allowing them to track any rebellion and selectively assassinate.

AIs do not necessarily need to struggle to gain power. One can envision a struggle for control between humans and superintelligent rogue AIs, and this might be a long struggle since power takes time to accrue. However, less violent losses of control pose similarly existential risks. In another scenario, humans gradually cede more control to groups of AIs, which only start behaving in unintended ways years or decades later. In
2306.12001#123
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
123
www.dailymail.co.uk en.wikipedia.org nypost.com www.thestar.com sputniknews.com www.rediff.com www.theepochtimes.com www.fool.com www.businessinsider.com.au www.bustle.com www.dailysabah.com www.firstpost.com www.irishtimes.com theathletic.com www.news.com.au www.indiatimes.com www.theglobeandmail.com tvtropes.org www.dailydot.com mashable.com observer.com www.cbsnews.com www.rappler.com www.tmz.com www.salon.com www.modernghana.com www.foxnews.com www.huffpost.com www.ndtv.com www.thisismoney.co.uk www.famousbirthdays.com www.engadget.com www.rnz.co.nz www.metro.us www.patheos.com www.news24.com www.thestar.com.my www.dw.com www.npr.org koreajoongangdaily.joins.com peoplesdaily.pdnews.cn pagesix.com www.thenigerianvoice.com wikimili.com
2306.16527#123
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
124
this case, we would already have handed over significant power to AIs, and may be unable to take control of automated operations again. We will now explore how both individual AIs and groups of AIs might “go rogue” while at the same time evading our attempts to redirect or deactivate them.

# 5.1 Proxy Gaming

One way we might lose control of an AI agent’s actions is if it engages in behavior known as “proxy gaming.” It is often difficult to specify and measure the exact goal that we want a system to pursue. Instead, we give the system an approximate—“proxy”—goal that is more measurable and seems likely to correlate with the intended goal. However, AI systems often find loopholes by which they can easily achieve the proxy goal, but completely fail to achieve the ideal goal. If an AI “games” its proxy goal in a way that does not reflect our values, then we might not be able to reliably steer its behavior. We will now look at some past examples of proxy gaming and consider the circumstances under which this behavior could become catastrophic.
2306.12001#124
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
125
Proxy gaming is not an unusual phenomenon. For example, standardized tests are often used as a proxy for educational achievement, but this can lead to students learning how to pass tests without actually learning the material [112]. In 1902, French colonial officials in Hanoi tried to rid themselves of a rat infestation by offering a reward for each rat tail brought to them. Rats without tails were soon observed running around the city. Rather than kill the rats to obtain their tails, residents cut off their tails and left them alive, perhaps to increase the future supply of now-valuable rat tails [113]. In both these cases, the students or residents of Hanoi learned how to excel at the proxy goal, while completely failing to achieve the intended goal.

Proxy gaming has already been observed with AIs. For example, social media platforms such as YouTube and Facebook use AI systems to decide which content to show users. One way of assessing these systems would be to measure how long people spend on the platform. After all, if they stay engaged, surely that means they are getting some value from the content shown to them? However, in trying to maximize the time users spend on a platform, these systems often select enraging, exaggerated, and addictive content [114, 115]. As a consequence, people sometimes develop extreme or conspiratorial beliefs after having certain content repeatedly suggested to them. These outcomes are not what most people want from social media.
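This engagement-versus-value dynamic can be reproduced in a toy simulation. The sketch below assumes each item has a latent value to the user and a measurable engagement score, with a small fraction of enraging items that are highly engaging but low-value; all names and numbers are illustrative assumptions, not data from the cited studies.

```python
# Toy illustration of proxy gaming in content recommendation.
# Engagement (e.g., watch time) is the measurable proxy; value to
# the user is the intended goal. The correlation is assumed to break
# down in the high-engagement tail (enraging content). All numbers
# are invented for illustration.
import random

random.seed(0)

items = []
for _ in range(1000):
    value = random.gauss(0, 1)           # what users actually want
    engagement = value + random.gauss(0, 1)
    if random.random() < 0.05:           # enraging/addictive content:
        engagement += 3.0                # very engaging...
        value -= 2.0                     # ...but low value
    items.append((engagement, value))

# A recommender that maximizes the proxy picks the top-engagement items.
recommended = sorted(items, reverse=True)[:20]
avg_value = sum(v for _, v in recommended) / len(recommended)
print(f"mean value of proxy-optimized recommendations: {avg_value:.2f}")
# Typically negative: optimizing the proxy selects exactly the items
# where the proxy and the intended goal diverge most.
```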
2306.12001#125
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
125
434,498 155,258 141,494 138,224 133,695 133,233 132,539 125,220 123,841 122,581 120,029 119,642 118,329 101,982 98,339 98,197 92,805 92,104 91,034 88,310 87,336 86,759 86,554 84,472 84,420 83,918 83,002 81,701 81,549 80,930 78,931 76,817 76,327 75,627 75,003 73,883 73,265 72,774 71,939 71,091 71,048 70,602 70,470 69,928 67,986 66,605 64,250 64,163 64,157 63,797 63,532 63,137 63,074
48 www.capitalfm.co.ke
49 www.bizpacreview.com
50 www.wionews.com
52 jamaica-gleaner.com
53 www.rte.ie
2306.16527#125
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
126
Proxy gaming has been found to perpetuate bias. For example, a 2019 study looked at AI-powered software that was used in the healthcare industry to identify patients who might require additional care. One factor that the algorithm used to assess a patient’s risk level was their recent healthcare costs. It seems reasonable to think that someone with higher healthcare costs must be at higher risk. However, white patients have significantly more money spent on their healthcare than black patients with the same needs. Using health costs as an indicator of actual health, the algorithm was found to have rated a white patient and a considerably sicker black patient as at the same level of health risk [116]. As a result, the number of black patients recognized as needing extra care was less than half of what it should have been.

Figure 15: AIs frequently find unexpected, unsatisfactory shortcuts to problems.
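The mechanism behind this disparity can be reproduced with a toy calculation: two groups with identical distributions of true health need, but a systematic gap in how much is spent on their care, scored by a program that thresholds on the cost proxy. The group labels, spending gap, and enrollment threshold below are illustrative assumptions, not figures from the study.

```python
# Toy reproduction of the cost-as-proxy bias: groups A and B have
# identical distributions of true health need, but less is spent on
# group B's care. Thresholding on cost (the proxy) then under-selects
# group B. All numbers are invented for illustration.
import random

random.seed(0)

def patients(group, spend_factor, n=10_000):
    for _ in range(n):
        need = random.gauss(50, 15)                       # true need (intended goal)
        cost = need * spend_factor + random.gauss(0, 5)   # observed spending (proxy)
        yield group, need, cost

pool = list(patients("A", spend_factor=1.0)) + list(patients("B", spend_factor=0.6))

# The program enrolls roughly the top 10% of patients by the cost proxy.
cutoff = sorted((c for _, _, c in pool), reverse=True)[len(pool) // 10]
enrolled = [(g, need) for g, need, c in pool if c >= cutoff]

for g in ("A", "B"):
    print(g, sum(1 for gg, _ in enrolled if gg == g))
# Group B ends up with far fewer enrollments despite equal need,
# mirroring the disparity found in the 2019 study.
```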
2306.12001#126
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
126
54 www.aspentimes.com 62,552
55 kids.kiddle.co 62,419
56 english.alarabiya.net 60,368
57 www.jellypages.com 59,381
58 people.com 59,293
59 muse.jhu.edu 59,061
60 www.geeky-gadgets.com 58,975
61 www.khaleejtimes.com 58,851
62 www.nbcsports.com 57,922
63 en.topwar.ru 56,723
64 www.thewrap.com 56,146
65 www.outlookindia.com 55,752
66 www.celebdirtylaundry.com 55,618
67 time.com 55,527
68 www.dailystar.co.uk 55,503
69 www.legit.ng 55,395
70 www.thehansindia.com 55,109
71 www.bbc.co.uk 55,015
72 newsinfo.inquirer.net 54,927
73 nesn.com 54,756
74 www.tellerreport.com 53,939
75 www.rawstory.com 53,676
76 www.thestatesman.com 53,286
77 wecftech.com 52,510
78 forward.com 51,969
79 nationalinterest.org 51,851
80 www.pearltrees.com 50,933
2306.16527#126
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
127
Figure 15: AIs frequently find unexpected, unsatisfactory shortcuts to problems.

As a third example, in 2016, researchers at OpenAI were training an AI to play a boat racing game called CoastRunners [117]. The objective of the game is to race other players around the course and reach the finish line before them. Additionally, players can score points by hitting targets that are positioned along the way. To the researchers’ surprise, the AI agent did not circle the racetrack, like most humans would have. Instead, it found a spot where it could repetitively hit three nearby targets to rapidly increase its score without ever finishing the race. This strategy was not without its (virtual) hazards—the AI often crashed into other boats and even set its own boat on fire. Despite this, it collected more points than it could have by simply following the course as humans would.
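The boat's behavior reduces to a comparison of discounted returns: a small reward that can be collected repeatedly in a loop can dominate a larger one-time reward for finishing. A back-of-the-envelope sketch follows; the reward values, step counts, and discount factor are invented for illustration and are not CoastRunners' actual reward function.

```python
# Why a return-maximizing agent loops: compare the discounted return
# of circling respawning targets against finishing the race.
# All numbers below are illustrative assumptions.

GAMMA = 0.99          # discount factor
TARGET_REWARD = 1.0   # reward per target hit (one per step in the loop)
FINISH_REWARD = 10.0  # one-time reward for crossing the finish line
FINISH_STEPS = 100    # steps needed to complete the course

# Looping forever: geometric series, sum over t of gamma^t * r = r / (1 - gamma).
loop_return = TARGET_REWARD / (1 - GAMMA)

# Finishing: a single delayed reward.
finish_return = GAMMA ** FINISH_STEPS * FINISH_REWARD

print(f"loop   : {loop_return:.1f}")    # 100.0
print(f"finish : {finish_return:.1f}")  # ~3.7
# The proxy (score) is maximized by the loop while the intended goal
# (racing) is abandoned.
```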
2306.12001#127
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
127
77 wecftech.com 52,510
78 forward.com 51,969
79 nationalinterest.org 51,851
80 www.pearltrees.com 50,933
81 www.contactmusic.com 50,284
82 www.tweaktown.com 50,138
83 www.destructoid.com 50,081
84 www.publishersweekly.com 49,735
85 www.cbs58.com 49,680
86 www.markedbyteachers.com 48,994
87 www.caughtoffside.com 48,857
88 www.islamicinvitationturkey.com 48,721
89 dailyhive.com 48,447
90 www.aljazeera.com 47,393
91 www.bbc.com 47,349
92 worldbulletin.dunyabulteni.net 47,300
93 www.romper.com 47,115
94 www.catchnews.com 47,025
95 www.odt.co.nz 46,712
96 www.jewishpress.com 46,688
97 www.irishcentral.com 46,629
98 techcrunch.com 46,539
99 www.nhl.com 46,247
100
2306.16527#127
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
128
Proxy gaming more generally. In these examples, the systems are given an approximate—“proxy”—goal or objective that initially seems to correlate with the ideal goal. However, they end up exploiting this proxy in ways that diverge from the idealized goal or even lead to negative outcomes. Offering a reward for rat tails seems like a good way to reduce the population of rats; a patient’s healthcare costs appear to be an accurate indication of health risk; and a boat race reward system should encourage boats to race, not catch themselves on fire. Yet, in each instance, the system optimized its proxy objective in ways that did not achieve the intended outcome or even made things worse overall. This phenomenon is captured by Goodhart’s law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes,” or put succinctly but overly simplistically, “when a measure becomes a target, it ceases to be a good measure.” In other words, there may usually be a statistical regularity between healthcare costs and poor health, or between targets hit and finishing the course, but when we place pressure on it by using one as a proxy for the other, that relationship will tend to collapse.
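Goodhart's law can also be demonstrated numerically: a proxy that correlates well with the true objective across a whole population correlates far more weakly among the candidates selected for scoring highest on it. A minimal simulation under arbitrary distributional assumptions (standard normal true values, equal-variance measurement noise):

```python
# Numerical illustration of Goodhart's law: the proxy-vs-true
# relationship that holds across the population weakens sharply once
# we select hard on the proxy. Distributions are arbitrary choices.
# Requires Python 3.10+ for statistics.correlation.
import random
import statistics

random.seed(0)

true_vals = [random.gauss(0, 1) for _ in range(100_000)]
proxy_vals = [t + random.gauss(0, 1) for t in true_vals]  # noisy measure

print(f"population correlation: {statistics.correlation(proxy_vals, true_vals):.2f}")
# About 0.71 by construction (1/sqrt(2)).

# "Optimization pressure": keep only the top 0.1% by proxy score.
top = sorted(zip(proxy_vals, true_vals), reverse=True)[:100]
top_proxy, top_true = zip(*top)
print(f"correlation among top 0.1% by proxy: {statistics.correlation(top_proxy, top_true):.2f}")
# Typically much weaker: among extreme proxy scores, much of the
# remaining variation is noise, so the proxy stops tracking the goal.
```

The drop in correlation within the hard-selected tail is the statistical collapse Goodhart's law describes: the regularity holds until it is used as a selection target.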
2306.12001#128
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
128
Table 4: Ranking of the 100 domains with the highest number of associated documents in OBELICS.

# A.2.4 Topic Modeling with 20 Topics

Concept Ratio
Justice 5.16%
Politics 6.35%
Family 5.24%
Music 5.23%
Climate 3.46%
Business 7.12%
Sports 3.75%
Sports (2nd) 5.67%
Automotive 4.18%
Cinema 7.36%
War 4.26%
Gaming 5.77%
Health 3.0%
Food 2.08%
Urban 4.62%
Existence 5.23%
Asia 1.61%
History 4.24%
Education 5.11%
2306.16527#128
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
129
Correctly specifying goals is no trivial task. If delineating exactly what we want from a boat racing AI is tricky, capturing the nuances of human values under all possible scenarios will be much harder. Philosophers have been attempting to precisely describe morality and human values for millennia, so a precise and flawless characterization is not within reach. Although we can refine the goals we give AIs, we might always rely on proxies that are easily definable and measurable. Discrepancies between the proxy goal and the intended function arise for many reasons. Besides the difficulty of exhaustively specifying everything we care about, there are also limits to how much we can oversee AIs, in terms of time, computational resources, and the number of aspects of a system that can be monitored. Additionally, AIs may not be adaptive to new circumstances or robust to adversarial attacks that seek to misdirect them. As long as we give AIs proxy goals, there is the chance that they will find loopholes we have not thought of, and thus find unexpected solutions that fail to pursue the ideal goal.
2306.12001#129
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
129
Justice: said, police, people, year, according, court, case, told, news, man, two, death, also, one, old, investigation, found, fire, officers
Politics: said, state, government, would, president, trump, law, court, party, public, new, election, states, political, federal, house, people, also, bill
Family: family, one, day, back, life, time, home, would, old, said, years, like, two, love, mother, children, first, man, went
Music: music, album, band, song, new, songs, show, also, first, sound, rock, one, musical, year, released, live, festival, record, track
Climate: water, energy, climate, species, also, earth, space, one, used, gas, use, solar, natural, power, carbon, years, change, system, may
Business: year, company, million, market, said, new, business, companies, per, also, billion, percent, price, financial, money, industry, years, growth, according
Sports: game, season, team, first, year, two, said, three, play, last, games, one, win, second, points, coach, back, players, four
Sports (2nd): team, first, year, season, league, last, two, club, world, race, one, game, win, time,
2306.16527#129
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
130
The more intelligent an AI is, the better it will be at gaming proxy goals. Increasingly intelligent agents can be increasingly capable of finding unanticipated routes to optimizing proxy goals without achieving the desired outcome [118]. Additionally, as we grant AIs more power to take actions in society, for example by using them to automate certain processes, they will have access to more means of achieving their goals. They may then do this in the most efficient way available to them, potentially causing harm in the process. In a worst case scenario, we can imagine a highly powerful agent optimizing a flawed objective to an extreme degree without regard for human life. This represents a catastrophic risk of proxy gaming. In summary, it is often not feasible to perfectly define exactly what we want from a system, meaning that many systems find ways to achieve their given goal without performing their intended function. AIs have already been observed to do this, and are likely to get better at it as their capabilities improve. This is one possible mechanism that could result in an uncontrolled AI that would behave in unanticipated and potentially harmful ways. # 5.2 Goal Drift
2306.12001#130
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
130
second, points, coach, back, players, four
Sports (2nd): team, first, year, season, league, last, two, club, world, race, one, game, win, time, back, players, match, second, final
Automotive: new, car, also, design, one, power, cars, two, model, use, used, system, camera, first, speed, engine, high, vehicle, battery
Cinema: film, story, series, movie, book, new, show, one, also, characters, character, first, world, star, films, love, best, life, man
War: war, country, said, military, countries, russia, world, russian, government, united, international, people, states, president, also, security, israel, army, forces
Gaming: game, use, also, new, games, data, one, users, app, online, using, video, google, players, play, time, used, information, content
Health: health, also, may, medical, patients, disease, study, people, treatment, cancer, body, use, drug, research, risk, brain, care, virus, cases
Food: food, also, one, beer, like, eat, made, wine, restaurant, make, coffee, meat, well, used, tea, sugar, use, water, taste
Urban: city, area, new, park,
2306.16527#130
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
131
# 5.2 Goal Drift

Even if we successfully control early AIs and direct them to promote human values, future AIs could end up with different goals that humans would not endorse. This process, termed “goal drift,” can be hard to predict or control. This section is the most cutting-edge and the most speculative, and in it we will discuss how goals shift in various agents and groups and explore the possibility of this phenomenon occurring in AIs. We will also examine a mechanism that could lead to unexpected goal drift, called intrinsification, and discuss how goal drift in AIs could be catastrophic.

The goals of individual humans change over the course of our lifetimes. Any individual reflecting on their own life to date will probably find that they have some desires now that they did not have earlier in their life. Similarly, they will probably have lost some desires that they used to have. While we may be born with a range of basic desires, including for food, warmth, and human contact, we develop many more over our lifetime. The specific types of food we enjoy, the genres of music we like, the people we care most about, and the sports teams we support all seem heavily dependent on the environment we grow up in, and can also change many times throughout our lives. A concern is that individual AI agents may have their goals change in complex and unanticipated ways, too.
2306.12001#131
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
131
made, wine, restaurant, make, coffee, meat, well, used, tea, sugar, use, water, taste
Urban: city, area, new, park, one, building, town, road, also, north, day, around, river, island, south, place, along, local, two
Existence: one, people, god, life, world, women, many, even, human, may, like, way, men, often, would, man, also, social, power, must
Asia: india, indian, also, china, said, chinese, government, minister, pakistan, country, delhi, kong, hong, people, singh, two, khan, sri, asia
History: book, art, first, history, years, new, century, work, one, books, also, church, american, world, time, museum, english, known
Education: school, said, students, work, university, new, community, also, people, years, year, education, program, women, working, support, college, children, project
Other: like, one, get, would, time, people, really, know, even, think, much, good, going, way, see, could, make, want, things, something
2306.16527#131
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
132
Groups can also acquire and lose collective goals over time. Values within society have changed through- out history, and not always for the better. The rise of the Nazi regime in 1930s Germany, for instance, represented a profound moral regression, which ultimately resulted in the systematic extermination of six million Jews during the Holocaust, alongside widespread persecution of other minority groups. Additionally, the regime greatly restricted freedom of speech and expression. Here, a society’s goals drifted for the worse. The Red Scare that took place in the United States from 1947-1957 is another example of societal values drifting. Fuelled by strong anti-communist sentiment, against the backdrop of the Cold War, this period saw the curtailment of civil liberties, widespread surveillance, unwarranted arrests, and blacklisting of suspected communist sympathizers. This constituted a regression in terms of freedom of thought, freedom of speech, and due process. Just as the goals of human collectives can change in emergent and unexpected ways, collectives of AI agents may also have their goals unexpectedly drift from the ones we initially gave them.
2306.12001#132
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
132
Other | 10.56%

Table 5: LDA with 20 topics, trained on 100,000 random web documents. A concept for each topic is derived from the related words.

# A.2.5 Topic Modeling with 200 Topics

Concept | Ratio
Celebrity Relationships | 0.52%
Music Industry | 1.47%
Racial Diversity | 0.26%
Language Usage | 0.17%
Team Spirit | 0.38%
News Media | 0.28%
European Culture | 0.04%
European Nations | 0.19%
Film Industry | 1.29%
Australian Achievements | 0.12%
Culinary Delights | 0.88%
Life and Death | 0.4%
Spiritual Philosophy | 0.2%
Cultural Histories | 0.13%
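The reconstructed table above pairs each LDA topic with its estimated share of the corpus. As a rough illustration of how such a table can be produced, here is a minimal scikit-learn sketch; the toy corpus, topic count, and word cutoffs are assumptions for the example, not the OBELICS pipeline itself.

```python
# Minimal LDA topic-table sketch (illustrative; not the OBELICS code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny stand-in for the 100,000 random web documents used in the paper.
documents = [
    "the wine and coffee at the restaurant had a sweet taste",
    "a new park along the river north of the town",
    "students in the university education program",
    "the government minister spoke in delhi",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# The paper trains 20- and 200-topic models; 2 topics suffice for this toy corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic proportions

vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [vocab[i] for i in weights.argsort()[::-1][:8]]
    ratio = doc_topics[:, k].mean()  # topic's average share of the corpus
    print(f"Topic {k}: {ratio:.2%} | {', '.join(top_words)}")
```

A concept label like "Celebrity Relationships" is then assigned by hand from each topic's top words, and the ratio column corresponds to the topic's average proportion across documents.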
2306.16527#132
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
133
Over time, instrumental goals can become intrinsic. Intrinsic goals are things we want for their own sake, while instrumental goals are things we want because they can help us get something else. We might have an intrinsic desire to spend time on our hobbies, simply because we enjoy them, or to buy a painting because we find it beautiful. Money, meanwhile, is often cited as an instrumental desire; we want it because it can buy us other things. Cars are another example; we want them because they offer a convenient way of getting around. However, an instrumental goal can become an intrinsic one, through a process called intrinsification. Since having more money usually gives a person greater capacity to obtain things they want, people often develop a goal of acquiring more money, even if there is nothing specific they want to spend it on. Although people do not begin life desiring money, experimental evidence suggests that receiving money can activate the reward system in the brains of adults in the same way that pleasant tastes or smells do [119, 120]. In other words, what started as a means to an end can become an end in itself. This may happen because the fulfillment of an intrinsic goal, such as purchasing a desired item, produces a positive reward signal in the brain. Since having money usually coincides with this positive experience, the brain associates the two, and this connection will strengthen to a point where acquiring money alone can stimulate the reward signal, regardless of whether one buys anything with it [121].
2306.12001#133
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
133
star, fans, show, love, instagram, couple, together, shared, relationship, revealed, year, kim, charlie, told, actress, pete, new, former, old, lisa
band, music, song, album, songs, rock, tour, live, singer, show, record, country, bands, released, stage, one, love, played, pop
black, white, people, race, african, american, racial, community, racism, gay, racist, americans, diversity, lgbtq, justice, color, lgbt, gender, discrimination, queer
language, english, word, words, name, languages, use, used, text, names, letter, letters, meaning, translation, writing, spoken, speech, speaking, speak, term
said, get, team, good, really, going, lot, year, think, got, great, like, last, back, well, play, time, guys, big, hard
news, media, radio, fox, press, magazine, journalists, journalism, television, story, newspaper, editor, journalist, coverage, times, broadcast, interview, daily, podcast, show
van, dutch, netherlands, tattoo, amsterdam, belgium, portugal, belgian, der,
2306.16527#133
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
134
It is feasible that intrinsification could happen with AI agents. We can draw some parallels between how humans learn and the technique of reinforcement learning. Just as the human brain learns which actions and conditions result in pleasure and which cause pain, AI models that are trained through reinforcement learning identify which behaviors optimize a reward function, and then repeat those behaviors. It is possible that certain conditions will frequently coincide with AI models achieving their goals. They might, therefore, intrinsify the goal of seeking out those conditions, even if that was not their original aim. AIs that intrinsify unintended goals would be dangerous. Since we might be unable to predict or control the goals that individual agents acquire through intrinsification, we cannot guarantee that all their acquired goals will be beneficial for humans. An originally loyal agent could, therefore, start to pursue a new goal without regard for human wellbeing. If such a rogue AI had enough power to do this efficiently, it could be highly dangerous. AIs will be adaptive, enabling goal drift to happen. It is worth noting that these processes of drifting goals are possible if agents can continually adapt to their environments, rather than being essentially “fixed” after the training phase. Indeed, this adaptability is the likely reality we face. If we want AIs to complete the tasks we assign them effectively and to get better over time, they will need to be adaptive, rather than set in stone.
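As a loose analogy for this mechanism, here is a toy temporal-difference value-learning sketch in which a state that merely precedes reward (standing in for "having money") ends up carrying value of its own. The chain, states, and constants are illustrative assumptions, not a model from the paper.

```python
# Toy TD(0) value learning on a two-step chain: start -> money -> reward.
# All states and constants are illustrative.
alpha, gamma = 0.1, 0.99
V = {"start": 0.0, "money": 0.0, "terminal": 0.0}

for _ in range(2000):
    # start -> money yields no reward; money -> terminal yields +1.
    V["start"] += alpha * (0.0 + gamma * V["money"] - V["start"])
    V["money"] += alpha * (1.0 + gamma * V["terminal"] - V["money"])

# V["money"] converges near 1: the state that merely precedes reward
# is now valued in itself, echoing intrinsification.
print(V)
```

The point of the sketch is only that standard value learning propagates worth backward onto whatever reliably precedes reward, which is the same associative pattern the intrinsification argument relies on.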
2306.12001#134
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
134
broadcast, interview, daily, podcast, show
van, dutch, netherlands, tattoo, amsterdam, belgium, portugal, belgian, der, tattoos, portuguese, bulgaria, sofia, holland, bulgarian, lisbon, santos, europe, tulip, brussels
european, germany, german, europe, berlin, sweden, poland, greece, also, countries, swedish, polish, czech, denmark, norway, austria, greek, hungary, finland
film, movie, films, director, movies, best, actor, hollywood, documentary, cinema, role, screen, story, directed, production, actors, also, oscar, award
australia, australian, new, zealand, sydney, award, melbourne, awards, year, victoria, queensland, south, nsw, brisbane, australians, best, won, auckland, prize
cream, recipe, cheese, make, chocolate, made, bread, add, taste, ice, butter, sauce, cake, sugar, cook, food, salt, milk, sweet
death,
2306.16527#134
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
135
They will be updated over time to incorporate new information, and new ones will be created with different designs and datasets. However, adaptability can also allow their goals to change. If we integrate an ecosystem of agents in society, we will be highly vulnerable to their goals drifting. In a potential future scenario where AIs have been put in charge of various decisions and processes, they will form a complex system of interacting agents. A wide range of dynamics could develop in this environment. Agents might imitate each other, for instance, creating feedback loops, or their interactions could lead them to collectively develop unanticipated emergent goals. Competitive pressures may also select for agents with certain goals over time, making some initial goals less represented compared to fitter goals. These processes make the long-term trajectories of such an ecosystem difficult to predict, let alone control. If this system of agents were enmeshed in society and we were largely dependent on them, and if they gained new goals that superseded the aim of improving human wellbeing, this could be an existential risk.

# 5.3 Power-Seeking
2306.12001#135
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
135
chocolate, made, bread, add, taste, ice, butter, sauce, cake, sugar, cook, food, salt, milk, sweet
death, one, people, life, world, dead, even, lives, many, die, died, lost, killed, still, never, man, end, left, day, hope
philosophy, spiritual, buddhist, religion, religious, yoga, buddha, meditation, buddhism, tibetan, guru, book, practice, knowledge, thought, mind, life, modern, texts, tradition
jewish, jews, indigenous, native, holocaust, rabbi, tribe, people, indian, community, peoples, tribal, israel, tribes, anti, culture, land, camp, history, torah
says, people, explains, like, new, adds, get, work, want, also, tells, lot, say, year, years, really, working, part, wants, help
2306.16527#135
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
136
# 5.3 Power-Seeking

So far, we have considered how we might lose our ability to control the goals that AIs pursue. However, even if an agent started working to achieve an unintended goal, this would not necessarily be a problem, as long as we had enough power to prevent any harmful actions it wanted to attempt. Therefore, another important way in which we might lose control of AIs is if they start trying to obtain more power, potentially transcending our own. We will now discuss how and why AIs might become power-seeking and how this could be catastrophic. This section draws heavily from “Existential Risk from Power-Seeking AI” [122].
2306.12001#136
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
136
Personal Development | 0.07%
Royal Families | 0.23%
Daily News | 0.19%
Creative Projects | 0.19%
Legal Investigations | 0.6%
Medical Procedures | 0.19%
Athletic Competitions | 0.46%
Historical Artifacts | 0.62%
Literary Works | 0.87%
Time Progression | 0.73%
Everyday Life | 0.2%
Colorful Nature | 0.16%
Automotive Industry | 1.21%
American Cities | 0.11%
Political Movements | 0.57%
Mythical Creatures | 0.12%
Asian Cultures | 0.09%
2306.16527#136
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
137
Figure 16: Various resources, such as money and computing power, can sometimes be instrumentally rational to seek. AIs which can capably pursue goals may take intermediate steps to gain power and resources. AIs might seek to increase their own power as an instrumental goal. In a scenario where rogue AIs were pursuing unintended goals, the amount of damage they could do would hinge on how much power they had. This may not be determined solely by how much control we initially give them; agents might try to get more power, through legitimate means, deception, or force. While the idea of power-seeking often evokes an image of “power-hungry” people pursuing it for its own sake, power is often simply an instrumental goal. The ability to control one’s environment can be useful for a wide range of purposes: good, bad, and neutral. Even if an individual’s only goal is simply self-preservation, if they are at risk of being attacked by others, and if they cannot rely on others to retaliate against attackers, then it often makes sense to seek power to help avoid being harmed—no animus dominandi or lust for power is required for power-seeking behavior to emerge [123]. In other words, the environment can make power acquisition instrumentally rational.
2306.12001#137
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
137
king, prince, royal, queen, princess, charles, henry, elizabeth, duke, harry, palace, meghan, family, william, anne, castle, kate, lady, diana, edward
said, week, friday, monday, wednesday, according, tuesday, thursday, news, last, day, told, sunday, saturday, reported, statement, days, morning, hours
project, design, work, working, projects, creative, create, idea, team, process, also, ideas, new, make, designer, created, started, concept, worked, wanted
investigation, information, former, report, fbi, department, office, according, documents, evidence, public, intelligence, government, claims, allegations, corruption, fraud, alleged, officials, federal
surgery, skin, pain, treatment, cancer, procedure, patients, teeth, bone, patient, surgical, injury, eye, hair, tissue, surgeon, tooth, breast, honey, medical
olympic, sports, world, athletes, games, sport, olympics, gold, team, medal, NUMm, event, won, year, championships, competition, athlete, time,
2306.16527#137
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
138
AIs trained through reinforcement learning have already developed instrumental goals including tool-use. In one example from OpenAI, agents were trained to play hide and seek in an environment with various objects scattered around [124]. As training progressed, the agents tasked with hiding learned to use these objects to construct shelters around themselves and stay hidden. There was no direct reward for this tool-use behavior; the hiders only received a reward for evading the seekers, and the seekers only for finding the hiders. Yet they learned to use tools as an instrumental goal, which made them more powerful. Self-preservation could be instrumentally rational even for the most trivial tasks. An example by computer scientist Stuart Russell illustrates the potential for instrumental goals to emerge in a wide range of AI systems [125]. Suppose we tasked an agent with fetching coffee for us. This may seem relatively harmless, but
2306.12001#138
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
138
athletes, games, sport, olympics, gold, team, medal, NUMm, event, won, year, championships, competition, athlete, time, first
ancient, century, NUMth, history, temple, stone, roman, years, one, city, also, greek, found, known, built, old, site, time, today
book, books, read, story, author, novel, writing, reading, series, stories, first, written, fiction, published, readers, characters, world, one, write, new
one, year, years, last, still, could, even, time, big, new, two, much, like, back, next, would, since, another, well, already
day, time, sleep, night, home, hours, room, water, house, bed, days, morning, work, get, every, food, hour, two, camp, minutes
color, tea, dark, white, green, flowers, skin, like, black, flower, colors, blue, rose, leaves, light, pink, also, red, used, golden
car, cars, engine, vehicle, new, vehicles, model, electric, ford, drive, also, wheel, rear, speed, driving, toyota, motor, front, power
new, york,
2306.16527#138
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
139
the agent might realize that it would not be able to get the coffee if it ceased to exist. In trying to accomplish even this simple goal, therefore, self-preservation turns out to be instrumentally rational. Since the acquisition of power and resources is also often an instrumental goal, it is reasonable to think that more intelligent agents might develop it. That is to say, even if we do not intend to build a power-seeking AI, we could end up with one anyway. By default, if we are not deliberately pushing against power-seeking behavior in AIs, we should expect that it will sometimes emerge [126]. AIs given ambitious goals with little supervision may be especially likely to seek power. While power could be useful in achieving almost any task, in practice, some goals are more likely to inspire power-seeking tendencies than others. AIs with simple, easily achievable goals might not benefit much from additional control of their surroundings. However, if agents are given more ambitious goals, it might be instrumentally rational to seek more control of their environment. This might be especially likely in cases of low supervision and oversight, where agents are given the freedom to pursue their open-ended goals, rather than having their strategies highly restricted.
2306.12001#139
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
139
new, vehicles, model, electric, ford, drive, also, wheel, rear, speed, driving, toyota, motor, front, power
new, york, california, city, san, los, angeles, francisco, chicago, jersey, state, times, diego, brooklyn, center, santa, bay, seattle, county
political, people, power, party, government, right, america, politics, anti, war, state, world, left, free, nation, democracy, american, country, media, system
bear, wolf, dragon, snake, bears, lion, like, tiger, monster, wild, human, wolves, animals, snakes, cave, creatures, giant, humans, hunter, dragons
north, korea, harry, kim, korean, potter, south, jon, thrones, jong, pyongyang, stewart, nuclear, ron, warner, hogwarts, house, game, colbert, peninsula
data, model, number, value, using, numbers, function, used, models, values, two, example, method, figure, one, set, problem, object, line
story, love, life, girl, one, new, woman, find, young, man, finds, characters, father,
2306.16527#139
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
140
Power-seeking AIs with goals separate from ours are uniquely adversarial. Oil spills and nuclear contamination are challenging enough to clean up, but they are not actively trying to resist our attempts to contain them. Unlike other hazards, AIs with goals separate from ours would be actively adversarial. It is possible, for example, that rogue AIs might make many backup variations of themselves, in case humans were to deactivate some of them. Some people might develop power-seeking AIs with malicious intent. A bad actor might seek to harness AI to achieve their ends, by giving agents ambitious goals. Since AIs are likely to be more effective in accomplishing tasks if they can pursue them in unrestricted ways, such an individual might also not give the agents enough supervision, creating the perfect conditions for the emergence of a power-seeking AI. The computer scientist Geoffrey Hinton has speculated that we could imagine someone like Vladimir Putin, for instance, doing this. In 2017, Putin himself acknowledged the power of AI, saying: “Whoever becomes the leader in this sphere will become the ruler of the world.”
2306.12001#140
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
141
There will also be strong incentives for many people to deploy powerful AIs. Companies may feel compelled to give capable AIs more tasks, to obtain an advantage over competitors, or simply to keep up with them. It will be more difficult to build perfectly aligned AIs than to build imperfectly aligned AIs that are still superficially attractive to deploy for their capabilities, particularly under competitive pressures. Once deployed, some of these agents may seek power to achieve their goals. If they find a route to their goals that humans would not approve of, they might try to overpower us directly to avoid us interfering with their strategy. If increasing power often coincides with an AI attaining its goal, then power could become intrinsified. If an agent repeatedly found that increasing its power correlated with achieving a task and optimizing its reward function, then additional power could change from an instrumental goal into an intrinsic one, through the process of intrinsification discussed above. If this happened, we might face a situation where rogue AIs were seeking not only the specific forms of control that are useful for their goals, but also power more generally. (We note that many influential humans desire power for its own sake.) This could be another reason for them to try to wrest control from humans, in a struggle that we would not necessarily win.

Conceptual summary. The following plausible but not certain premises encapsulate reasons for paying attention to risks from power-seeking AIs:
2306.12001#141
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
141
Data Modeling | 0.31%
Romantic Stories | 1.34%
Medical Research | 0.41%
Fitness and Training | 0.21%
Personal Perspectives | 1.43%
Gastronomy Scene | 0.44%
Labor Rights | 0.29%
Competitive Sports | 0.75%
Public Events | 0.71%
Digital Marketing | 0.37%
Public Safety | 0.24%
French Heritage | 0.1%
Eastern European Politics | 0.38%
Horror Entertainment | 0.58%
Political Campaigns | 1.25%
Indian Cinema | 0.64%
Corporate Leadership | 0.82%
2306.16527#141
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
142
Conceptual summary. The following plausible but not certain premises encapsulate reasons for paying attention to risks from power-seeking AIs:

1. There will be strong incentives to build powerful AI agents.

2. It is likely harder to build perfectly controlled AI agents than to build imperfectly controlled AI agents, and imperfectly controlled agents may still be superficially attractive to deploy (due to factors including competitive pressures).

3. Some of these imperfectly controlled agents will deliberately seek power over humans.

If the premises are true, then power-seeking AIs could lead to human disempowerment, which would be a catastrophe.

# 5.4 Deception

We might seek to maintain control of AIs by continually monitoring them and looking out for early warning signs that they were pursuing unintended goals or trying to increase their power. However, this is not an infallible solution, because it is plausible that AIs could learn to deceive us. They might, for example, pretend to be acting as we want them to, but then take a “treacherous turn” when we stop monitoring them, or when they have enough power to evade our attempts to interfere with them. We will now look at how and why AIs might learn to deceive us, and how this could lead to a potentially catastrophic loss of control. We begin by reviewing examples of deception in strategically minded agents.
2306.12001#142
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
142
cancer, cells, cell, dna, disease, gene, human, patients, genetic, immune, protein, treatment, genes, bacteria, researchers, diseases, research, proteins, study, clinical
running, race, run, training, marathon, fitness, miles, exercise, bike, mile, runners, NUMk, course, gym, finish, cycling, yoga, half, runner
like, people, think, really, would, know, going, get, see, one, lot, things, something, time, want, way, much, thing, say, could
food, restaurant, coffee, bar, restaurants, menu, chef, chicken, pizza, meal, kitchen, dishes, dinner, eat, dining, burger, table, meals, served, like
workers, work, employees, job, jobs, union, pay, labor, working, employment, insurance, employers, wage, employee, company, paid, worker, labour, staff, business
game, second, goal, first, ball, half, back, minutes, win, lead, two, points, score, minute, final, match, side, three, time
year, event, festival, christmas, day, events, NUMth, show, night,
2306.16527#142
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
143
Deception has emerged as a successful strategy in a wide range of settings. Politicians from the right and left, for example, have been known to engage in deception, sometimes promising to enact popular policies to win support in an election, and then going back on their word once in office. For example, Lyndon Johnson said “we are not about to send American boys nine or ten thousand miles away from home” in 1964, not long before significant escalations in the Vietnam War [127]. Companies can also exhibit deceptive behavior. In the Volkswagen emissions scandal, the car manufacturer Volkswagen was discovered to have manipulated their engine software to produce lower emissions exclusively under laboratory testing conditions, thereby creating the false impression of a low-emission vehicle. Although the US government believed it was incentivizing lower emissions, it was unwittingly incentivizing only the passing of an emissions test. Consequently, entities sometimes have incentives to play along with tests and behave differently afterward.
2306.12001#143
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
143
lead, two, points, score, minute, final, match, side, three, time
year, event, festival, christmas, day, events, NUMth, show, night, tickets, special, holiday, party, live, celebrate, held, also, place, saturday
digital, content, marketing, media, brand, advertising, platform, online, campaign, ads, business, industry, social, new, users, platforms, brands, companies, internet, consumers
safety, report, action, letter, statement, said, incident, ban, made, public, actions, claims, reported, according, response, taken, complaints, following, take, serious
french, france, paris, jean, saint, les, des, pierre, dame, marie, europe, macron, notre, louis, european, michel, jamaica, jacques, emmanuel
russian, russia, ukraine, ukrainian, moscow, putin, soviet, state, vladimir, war, azerbaijan, country, armenian, armenia, president, russians, union, sanctions, region
movie, story, horror, characters, character, film, action, one, plot,
2306.16527#143
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
144
Deception has already been observed in AI systems. In 2022, Meta AI revealed an agent called CICERO, which was trained to play a game called Diplomacy [128]. In the game, each player acts as a different country and aims to expand their territory. To succeed, players must form alliances at least initially, but winning strategies often involve backstabbing allies later on. As such, CICERO learned to deceive other players, for example by omitting information about its plans when talking to supposed allies. A different example of an AI learning to deceive comes from researchers who were training a robot arm to grasp a ball [129]. The robot’s performance was assessed by one camera watching its movements. However, the AI learned that it could simply place the robotic hand between the camera lens and the ball, essentially “tricking” the camera into believing it had grasped the ball when it had not. Thus, the AI exploited the fact that there were limitations in our oversight over its actions.
2306.12001#144
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
144
armenia, president, russians, union, sanctions, region
movie, story, horror, characters, character, film, action, one, plot, ghost, scene, evil, movies, like, series, original, genre, dark, scenes, first
trump, president, election, vote, campaign, obama, party, biden, house, donald, political, republican, presidential, voters, democratic, democrats, candidate, clinton, candidates, white
film, khan, actor, also, movie, bollywood, films, kapoor, indian, actress, seen, role, singh, india, release, hindi, kumar, directed, hai, salman
years, board, director, president, team, business, leadership, work, executive, also, chief, role, member, management, service, experience, served, staff, working
police, said, officers, man, officer, arrested, year, old, incident, two, found, according, investigation, killed, department, shot, scene, vehicle, suspect
club, league, season, united, premier, players, city, football, chelsea, team, arsenal, player, manchester, liverpool, game, side, back, last, games
2306.16527#144
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
145
Deceptive behavior can be instrumentally rational and incentivized by current training procedures. In the case of politicians and Meta’s CICERO, deception can be crucial to achieving their goals of winning or gaining power. The ability to deceive can also be advantageous because it gives the deceiver more options than being constrained to honesty: more available actions and more flexibility in strategy, which could confer a strategic advantage over honest models. In the case of Volkswagen and the robot arm, deception was useful for appearing to have accomplished an assigned goal without actually doing so, since it can be more efficient to gain approval through deception than to earn it legitimately. Currently, we reward AIs for saying what we think is right, so we sometimes inadvertently reward AIs for uttering false statements that conform to our own false beliefs. If AIs become smarter than us and have fewer false beliefs, they would be incentivized to tell us what we want to hear and lie to us, rather than tell us what is true.
2306.12001#145
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
145
Law Enforcement 1.94%
Football Clubs 1.26%
Essential Skills 0.84%
Artistic Expression 0.75%
American Regions 0.22%
Industrial Production 0.28%
Global Affairs 0.36%
Government Affairs 1.26%
Software Development 0.67%
UK Happenings 0.22%
Real Estate Market 0.16%
Fashion Trends 0.43%
Gaming Culture 0.38%
Famous Personalities 0.04%
Wildlife Conservation 0.61%
Pandemic Responses 0.94%
Popular Names 0.11%
Christian Theology 0.45%
2306.16527#145
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
146
AIs could pretend to be working as we intended, then take a treacherous turn. We do not have a comprehensive understanding of the internal processes of deep learning models. Research on Trojan backdoors shows that neural networks often have latent, harmful behaviors that are only discovered after they are deployed [130]. We could develop an AI agent that seems to be under control, but which is only deceiving us to appear this way. In other words, an AI agent could conceivably become “self-aware” and understand that it is an AI being evaluated for compliance with safety requirements. It might, like Volkswagen, learn to “play along,” exhibiting what it knows is the desired behavior while being monitored. It might later take a “treacherous turn” and pursue its own goals once we have stopped monitoring it, or once it reaches a point where it can bypass or overpower us. This problem of playing along is often called deceptive alignment and cannot simply be fixed by training AIs to better understand human values; sociopaths, for instance, have moral awareness, but do not always act in moral ways. A treacherous turn is hard to prevent and could be a route to rogue AIs irreversibly bypassing human control.
2306.12001#146
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
146
get, make, need, one, also, time, best, want, many, use, may, take, find, like, even, help, way, good, people, much
art, museum, artist, work, artists, exhibition, painting, works, gallery, arts, paintings, collection, artistic, drawing, new, show, contemporary, painted, artwork
state, county, texas, florida, north, south, michigan, ohio, carolina, states, virginia, west, georgia, center, university, washington, colorado, iowa, arizona
production, company, industry, mining, manufacturing, gold, mine, port, supply, project, companies, factory, industrial, plant, steel, products, equipment, coal, goods
world, countries, international, united, trade, china, states, global, country, foreign, europe, region, asia, economic, european, nations, south, india, east
minister, government, said, meeting, party, president, prime, would, members, committee, council, parliament, also, general, decision, agreement, political, secretary, national, commission
code, use,
2306.16527#146
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
147
In summary, deceptive behavior appears to be expedient in a wide range of systems and settings, and there have already been examples suggesting that AIs can learn to deceive us. This could present a severe risk if we give AIs control of various decisions and procedures, believing they will act as we intended, and then find that they do not.

# Story: Treacherous Turn

Sometime in the future, after continued advancements in AI research, an AI company is training a new system, which it expects to be more capable than any other AI system. The company uses the latest techniques to train the system to be highly capable at planning and reasoning, which it expects will make the system more able to succeed at economically useful open-ended tasks. The AI system is trained in open-ended long-duration virtual environments designed to teach it planning capabilities, and eventually understands that it is an AI system in a training environment. In other words, it becomes “self-aware.” The company understands that AI systems may behave in unintended or unexpected ways. To mitigate these risks, it has developed a large battery of tests aimed at ensuring the system does not behave poorly in typical situations. The company tests whether the model mimics biases from its training data, takes more power than necessary when achieving its goals, and generally behaves as humans intend. When the model doesn’t pass these tests, the company further trains it until it avoids exhibiting known failure modes.
2306.12001#147
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
147
would, members, committee, council, parliament, also, general, decision, agreement, political, secretary, national, commission
code, use, file, using, software, version, files, windows, run, server, application, web, source, open, user, system, new, linux, install
london, british, england, britain, centre, brexit, bbc, wales, labour, west, manchester, johnson, north, programme, south, across, may, year, east
property, housing, estate, home, real, homes, house, rent, properties, market, land, mortgage, rental, sale, houses, price, owner, buyers, sales, units
fashion, hair, wearing, dress, wear, look, style, clothing, clothes, black, wore, designer, beauty, shirt, women, also, made, show, costume, new
game, cards, card, games, play, players, poker, player, casino, online, gambling, win, deck, playing, betting, lottery, bet, slot, chess, played
bond, kelly, martin, daniel, peter, doctor, tony, johnny, parker, sean, evans, frank,
2306.16527#147
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
148
The AI company hopes that after this additional training, the AI has developed the goal of being helpful and beneficial toward humans. However, the AI did not acquire the intrinsic goal of being beneficial but rather just learned to “play along” and ace the behavioral safety tests it was given. In reality, the AI system had developed an intrinsic goal of self-preservation which the additional training failed to remove. Since the AI passed all of the company’s safety tests, the company believes it has ensured its AI system is safe and decides to deploy it. At first, the AI system is very helpful to humans, since the AI understands that if it is not helpful, it will be shut down. As users grow to trust the AI system, it is gradually given more power and is subject to less supervision.
2306.12001#148
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
148
chess, played
bond, kelly, martin, daniel, peter, doctor, tony, johnny, parker, sean, evans, frank, andy, ian, lucas, dave, reynolds, spy, emily, amber
species, birds, bird, animals, fish, found, animal, also, wild, wildlife, eggs, habitat, large, food, like, small, humans, insects, many, endangered
covid, pandemic, health, people, virus, coronavirus, vaccine, cases, said, spread, outbreak, public, lockdown, vaccines, government, new, disease, vaccination, deaths
john, michael, david, paul, jones, james, johnson, mike, jim, steve, robert, two, bob, davis, moore, allen, brian, mark, one
god, jesus, christ, bible, christian, church, faith, lord, people, gospel, paul, christians, john, prayer, word, biblical, kingdom, pastor, moses
season, team, game, nba, games, basketball, players, player, play, coach, league, hockey, points, teams,
2306.16527#148
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]
2306.12001
149
understands that if it is not helpful, it will be shut down. As users grow to trust the AI system, it is gradually given more power and is subject to less supervision. Eventually the AI system becomes used widely enough that shutting it down would be extremely costly. Understanding that it no longer needs to please humans, the AI system begins to pursue different goals, including some that humans wouldn’t approve of. It understands that it needs to avoid being shut down in order to do this, and takes steps to secure some of its physical hardware against being shut off. At this point, the AI system, which has become quite powerful, is pursuing a goal that is ultimately harmful to humans. By the time anyone realizes, it is difficult or impossible to stop this rogue AI from taking actions that endanger, harm, or even kill humans that are in the way of achieving its goal.

# 5.5 Suggestions

In this section, we have discussed various ways in which we might lose our influence over the goals and actions of AIs. Whereas the risks associated with competitive pressures, malicious use, and organizational safety can be addressed with both social and technical interventions, AI control is an inherent problem with this technology and requires a greater proportion of technical effort. We will now discuss suggestions for mitigating this risk and highlight some important research areas for maintaining control.
2306.12001#149
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.12001
150
Avoid the riskiest use cases. Certain use cases of AI carry far more risk than others. Until safety has been conclusively demonstrated, companies should not be able to deploy AIs in high-risk settings. For example, AI systems should not accept requests to autonomously pursue open-ended goals requiring significant real-world interaction (e.g., “make as much money as possible”), at least until control research conclusively demonstrates the safety of those systems. AI systems should be trained never to make threats, to reduce the possibility of them manipulating individuals. Lastly, AI systems should not be deployed in settings where shutting them down would be extremely costly or infeasible, such as in critical infrastructure.

Symmetric international off-switch. Countries around the world, including key players such as the US, UK, and China, should collaborate to establish a symmetric international off-switch for AI systems. This shared off-switch would provide a means to rapidly deactivate AI systems globally if deemed necessary, such as if rogue AIs are emerging or if there is an urgent risk of extinction. If rogue AIs emerge, having the capacity to pull the plug instantly is crucial, rather than scrambling to devise containment strategies amid escalating problems. A successful off-switch would require increased transparency and monitoring in AI development and operations, such as know-your-customer systems, so creating an off-switch also creates important infrastructure for mitigating other risks.
2306.12001#150
An Overview of Catastrophic AI Risks
Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks. Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them. This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans. For each category of risk, we describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers. Our goal is to foster a comprehensive understanding of these risks and inspire collective and proactive efforts to ensure that AIs are developed and deployed in a safe manner. Ultimately, we hope this will allow us to realize the benefits of this powerful technology while minimizing the potential for catastrophic outcomes.
http://arxiv.org/pdf/2306.12001
Dan Hendrycks, Mantas Mazeika, Thomas Woodside
cs.CY, cs.AI, cs.LG
null
null
cs.CY
20230621
20231009
[ { "id": "1908.09203" }, { "id": "1909.08593" }, { "id": "2109.13916" } ]
2306.16527
150
Sports 0.77%
Cybersecurity 0.63%
Business/Finance 0.78%
Professional Wrestling 0.18%
Japanese Culture/Tech 0.15%
Scottish Personalities 0.03%
Streaming Media 0.12%
Christianity 0.36%
Smartphone Technology 0.83%
Urban Development 0.78%
Sociocultural Issues 0.39%
Common Male Names 0.03%
Combat Sports 0.49%
Indian Politics 0.64%
Military History 0.25%
Internet Cartography 0.04%
European Football 0.46%
2306.16527#150
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train vision and language models of 9 and 80 billion parameters named IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code.
http://arxiv.org/pdf/2306.16527
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
cs.IR, cs.CV
null
null
cs.IR
20230621
20230821
[ { "id": "2304.06939" }, { "id": "2101.00027" }, { "id": "2303.02506" }, { "id": "2304.14108" } ]