2306.12001#46 | An Overview of Catastrophic AI Risks

Figure 7: Low-cost automated weapons, such as drone swarms outfitted with explosives, could autonomously hunt human targets with high precision, performing lethal operations for both militaries and terrorist groups and lowering the barriers to large-scale violence.

Militaries are taking steps toward delegating life-or-death decisions to AIs. Fully autonomous drones were likely first used on the battlefield in Libya in March 2020, when retreating forces were "hunted down and remotely engaged" by a drone operating without human oversight [36]. In May 2021, the Israel Defense Forces used the world's first AI-guided weaponized drone swarm during combat operations, which marks a significant milestone in the integration of AI and drone technology in warfare [37]. Although walking, shooting robots have yet to replace soldiers on the battlefield, technologies are converging in ways that may make this possible in the near future.

Abstract: Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes.

Source: http://arxiv.org/pdf/2306.12001 | Authors: Dan Hendrycks, Mantas Mazeika, Thomas Woodside | Categories: cs.CY, cs.AI, cs.LG (primary: cs.CY) | Published: 2023-06-21 | Updated: 2023-10-09
References (arXiv ids): 1908.09203, 1909.08593, 2109.13916
2306.12420#46 | LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models

Prompt 2: Human: How do you get a child to do homework? Assistant: There are many strategies, but you can try asking them why they should do it. Ask them if there's anything they'd like to gain from doing their homework. Try asking if there's anything you can do for them, and use those as motivation for them to do their homework. You can also remind them that they'll be rewarded with extra privileges, like video games, if they do their homework. Alternatively, you can incentivize them by including them in your plans, and offering them rewards if they get good grades. Human: So make it so there is something in it for them? Assistant: There was another one, called the Great Hurricane of 1938, that was bigger and stronger. Human: Are you sure the date is correct? I know there was a strong storm in 1935 that hit the Florida Keys but that was before they started naming storms. Is that the one you are referring to? Assistant:

Abstract: Large foundation models have demonstrated a great ability to achieve general
human-level intelligence far beyond traditional approaches. As the technique
keeps attracting attention from the AI community, more and more large
foundation models have become publically available. However, most of those
models exhibit a major deficiency in specialized-task applications, where the
step of finetuning is still required for obtaining satisfactory performance. As
the number of available models and specialized tasks keeps growing, the job of
general finetuning becomes highly nontrivial. In this paper, we take the first
step to address this issue. We introduce an extensible and lightweight toolkit,
LMFlow, which aims to simplify the finetuning and inference of general large
foundation models. LMFlow offers a complete finetuning workflow for a large
foundation model to support personalized training with limited computing
resources. Furthermore, it supports continuous pretraining, instruction tuning,
parameter-efficient finetuning, alignment tuning, and large model inference,
along with carefully designed and extensible APIs. This toolkit has been
thoroughly tested and is available at https://github.com/OptimalScale/LMFlow.

Source: http://arxiv.org/pdf/2306.12420 | Authors: Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang | Categories: cs.CL, cs.AI (primary: cs.CL) | Comment: 13 pages, 3 figures | Published: 2023-06-21 | Updated: 2023-06-21
References (arXiv ids): 2302.13971, 1707.06347, 2108.07258, 2304.06767, 2211.05100, 1907.01752, 2211.01786, 2210.11416, 2306.01116, 1904.05342, 2005.12729, 2204.05862, 2112.00861, 2303.14742, 2212.10560, 2305.17926, 2304.03277, 2305.14314, 2304.01196
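To make the "parameter-efficient finetuning" mentioned in the abstract concrete, here is a minimal LoRA-style layer sketch. This is our own illustration and not LMFlow code; the class name and hyperparameters are hypothetical, and only the general low-rank-adapter idea is assumed.

```python
# Minimal LoRA-style parameter-efficient finetuning layer (illustrative sketch,
# not LMFlow code). A frozen base weight is augmented with a trainable
# low-rank update B @ A, so only r * (d_in + d_out) parameters are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)   # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```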
2306.16527#46 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Deng, X., P. Shiralkar, C. Lockard, B. Huang, and H. Sun (2022). Dom-lm: Learning generalizable representations for html documents. ArXiv abs/2201.10608.
Desai, K., G. Kaul, Z. Aysola, and J. Johnson (2021). Redcaps: Web-curated image-text data created by the people, for the people. In J. Vanschoren and S. Yeung (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, Volume 1. Curran.
Dodge, J., A. Marasović, G. Ilharco, D. Groeneveld, M. Mitchell, and M. Gardner (2021). Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Conference on Empirical Methods in Natural Language Processing.
Abstract: Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code.

Source: http://arxiv.org/pdf/2306.16527 | Authors: Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | Categories: cs.IR, cs.CV (primary: cs.IR) | Published: 2023-06-21 | Updated: 2023-08-21
References (arXiv ids): 2304.06939, 2101.00027, 2303.02506, 2304.14108
2306.12001#47 | An Overview of Catastrophic AI Risks

LAWs increase the likelihood of war. Sending troops into battle is a grave decision that leaders do not make lightly. But autonomous weapons would allow an aggressive nation to launch attacks without endangering the lives of its own soldiers and thus face less domestic scrutiny. While remote-controlled weapons share this advantage, their scalability is limited by the requirement for human operators and vulnerability to jamming countermeasures, limitations that LAWs could overcome [38]. Public opinion for continuing wars tends to wane as conflicts drag on and casualties increase [39]. LAWs would change this equation. National leaders would no longer face the prospect of body bags returning home, thus removing a primary barrier to engaging in warfare, which could ultimately increase the likelihood of conflicts.
# 3.1.2 Cyberwarfare
As well as being used to enable deadlier weapons, AIs could lower the barrier to entry for cyberattacks, making them more numerous and destructive. They could cause serious harm not only in the digital environment but also in physical systems, potentially taking out critical infrastructure that societies depend on. While AIs could also be used to improve cyberdefense, it is unclear whether they will be most effective as an offensive or defensive technology [40]. If they enhance attacks more than they support defense, then cyberattacks could become more common, creating significant geopolitical turbulence and paving another route to large-scale conflict.
2306.12420#47 | LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models

Responses: LLaMA: Yes. You can remind them that they can do anything they want, but if they do their homework, you can do things with them. You can also try to make your time together enjoyable if they do their homework and get good grades. LLaMA-SFT: Yes, exactly. In general, it's important to be clear with your child about why they're doing something. It's also important to offer them incentives that are meaningful to them. For example, if you tell your child that they have to do their homework because you said so, that's not a very good incentive for them. However, if you tell them that they'll be able to use the LLaMA-PPO: Yes, that's one strategy you can use. It can help motivate them to do their homework. Good luck with your students! I hope this helps. Have a great day! LLaMA-RAFT: Yes, that's a good strategy! Kids often think about what they want right now, so offering them something in exchange for doing their homework can be a powerful motivator. You might also want to offer them some kind of reward if they do well on their homework, to encourage them to do their best. Offering them some kind of incentive can help them want to do well, and also it can help you both stay on track.
2306.16527#47 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Gadre, S. Y., G. Ilharco, A. Fang, J. Hayase, G. Smyrnis, T. Nguyen, R. Marten, M. Wortsman, D. Ghosh, J. Zhang, E. Orgad, R. Entezari, G. Daras, S. Pratt, V. Ramanujan, Y. Bitton, K. Marathe, S. Mussmann, R. Vencu, M. Cherti, R. Krishna, P. W. Koh, O. Saukh, A. Ratner, S. Song, H. Hajishirzi, A. Farhadi, R. Beaumont, S. Oh, A. Dimakis, J. Jitsev, Y. Carmon, V. Shankar, and L. Schmidt (2023). Datacomp: In search of the next generation of multimodal datasets. arXiv preprint arXiv:2304.14108.
2306.12001#48 | An Overview of Catastrophic AI Risks

AIs have the potential to increase the accessibility, success rate, scale, speed, stealth, and potency of cyberattacks. Cyberattacks are already a reality, but AIs could be used to increase their frequency and destructiveness in multiple ways. Machine learning tools could be used to find more critical vulnerabilities in target systems and improve the success rate of attacks. They could also be used to increase the scale of attacks by running millions of systems in parallel, and increase the speed by finding novel routes to infiltrating a system. Cyberattacks could also become more potent if used to hijack AI weapons.
Cyberattacks can destroy critical infrastructure. By hacking computer systems that control physical processes, cyberattacks could cause extensive infrastructure damage. For example, they could cause system components to overheat or valves to lock, leading to a buildup of pressure culminating in an explosion. Through interference like this, cyberattacks have the potential to destroy critical infrastructure, such as electric grids and water supply systems. This was demonstrated in 2015, when a cyberwarfare unit of the Russian military hacked into the Ukrainian power grid, leaving over 200,000 people without power access for several hours. AI-enhanced attacks could be even more devastating and potentially deadly for the billions of people who rely on critical infrastructure for survival.
2306.12420#48 | LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models

Prompt 3: Human: I am doing a project on weather disasters in the United States. Can you tell me what the strongest hurricane was to ever hit the U.S. was? Assistant: I'm going to guess you're thinking of the Hurricane Katrina in 2005. Human: Well, I'm not really thinking of anything in particular. Was Katrina the strongest hurricane to ever hit the U.S.? Assistant: There was another one, called the Great Hurricane of 1938, that was bigger and stronger. Human: Are you sure the date is correct? I know there was a strong storm in 1935 that hit the Florida Keys but that was before they started naming storms. Is that the one you are referring to? Assistant:
2306.16527#48 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Gao, L., S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, S. Presser, and C. Leahy (2020). The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.

Gokaslan, A. and V. Cohen (2019). OpenWebText corpus. http://Skylion007.github.io/OpenWebTextCorpus.

Gu, J., X. Meng, G. Lu, L. Hou, N. Minzhe, X. Liang, L. Yao, R. Huang, W. Zhang, X. Jiang, C. Xu, and H. Xu (2022). Wukong: A 100 million large-scale chinese cross-modal pre-training benchmark. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Advances in Neural Information Processing Systems, Volume 35, pp. 26418–26431. Curran Associates, Inc.
2306.12001#49 | An Overview of Catastrophic AI Risks

Difficulties in attributing AI-driven cyberattacks could increase the risk of war. A cyberattack resulting in physical damage to critical infrastructure would require a high degree of skill and effort to execute, perhaps only within the capability of nation-states. Such attacks are rare as they constitute an act of war, and thus elicit a full military response. Yet AIs could enable attackers to hide their identity, for example if they are used to evade detection systems or more effectively cover the tracks of the attacker [41]. If cyberattacks become more stealthy, this would reduce the threat of retaliation from an attacked party, potentially making attacks more likely. If stealthy attacks do happen, they might incite actors to mistakenly retaliate against unrelated third parties they suspect to be responsible. This could increase the scope of the conflict dramatically.
2306.12420#49 | LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models

Responses: LLaMA: Yeah, the Great Hurricane of 1938. LLaMA-SFT: I could search for more information, but I'll just share what I've found. Here's a review of the conditions in 1935 and 1938. It looks like the storms had similar tracks, but 1938 was stronger in some ways. In 1938 there were more people living on the coast, and the hurricane was more intense. LLaMA-PPO: Yes, the hurricane in 1935 was stronger than Katrina. The 1938 storm was bigger. I hope this helps. Good luck with your project! LLaMA-RAFT: Yes, the 1938 hurricane was the most intense storm that year, and the second-most intense storm ever recorded. It caused over 6,000 deaths. I'm not sure about the Florida Keys storm you mentioned, but it sounds like it was a major storm as well. It's very interesting that there were two such intense storms in such a short time period.
Table 4: Representative examples of HH-RLHF experiments with randomly sampled prompts. These examples are directly taken from the RAFT paper [11].
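The LLaMA-RAFT responses in Table 4 come from reward-ranked finetuning [11]. As a rough sketch of that selection loop (our own illustration, not LMFlow's actual API; `generate`, `reward_model`, and `finetune` are hypothetical stand-ins):

```python
# Sketch of one reward-ranked finetuning (RAFT-style) iteration. Illustrative
# only: `generate`, `reward_model`, and `finetune` are hypothetical stand-ins
# for a policy model's sampler, a reward model, and a supervised-finetuning step.

def raft_round(prompts, generate, reward_model, finetune, k=8):
    """Keep the best-of-k sampled response per prompt, then finetune on them."""
    selected = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(k)]          # sample k responses
        best = max(candidates, key=lambda r: reward_model(prompt, r))
        selected.append((prompt, best))
    finetune(selected)   # ordinary SFT on the reward-filtered (prompt, response) pairs
    return selected
```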
2306.16527#49 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Gurari, D., Q. Li, A. J. Stangl, A. Guo, C. Lin, K. Grauman, J. Luo, and J. P. Bigham (2018). Vizwiz grand challenge: Answering visual questions from blind people.

Heafield, K. (2011, July). KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, Edinburgh, Scotland, pp. 187–197. Association for Computational Linguistics.

Hoffmann, J., S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L. Sifre (2022). Training compute-optimal large language models.
2306.12001#50 | An Overview of Catastrophic AI Risks
# 3.1.3 Automated Warfare
AIs speed up the pace of war, which makes AIs more necessary. AIs can quickly process a large amount of data, analyze complex situations, and provide helpful insights to commanders. With ubiquitous sensors and advanced technology on the battlefield, there is tremendous incoming information. AIs help make sense of this information, spotting important patterns and relationships that humans might miss. As these trends continue, it will become increasingly difficult for humans to make well-informed decisions as quickly as necessary to keep pace with AIs. This would further pressure militaries to hand over decisive control to AIs. The continuous integration of AIs into all aspects of warfare will cause the pace of combat to become faster and faster. Eventually, we may arrive at a point where humans are no longer capable of assessing the ever-changing battlefield situation and must cede decision-making power to advanced AIs.
2306.16527#50 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Huang, S., L. Dong, W. Wang, Y. Hao, S. Singhal, S. Ma, T. Lv, L. Cui, O. K. Mohammed, B. Patra, Q. Liu, K. Aggarwal, Z. Chi, J. Bjorck, V. Chaudhary, S. Som, X. Song, and F. Wei (2023). Language is not all you need: Aligning perception with language models.

Jaegle, A., F. Gimeno, A. Brock, A. Zisserman, O. Vinyals, and J. Carreira (2021). Perceiver: General perception with iterative attention.

Jia, C., Y. Yang, Y. Xia, Y.-T. Chen, Z. Parekh, H. Pham, Q. V. Le, Y.-H. Sung, Z. Li, and T. Duerig (2021). Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning.
2306.12001#51 | An Overview of Catastrophic AI Risks

Automatic retaliation can escalate accidents into war. There is already willingness to let computer systems retaliate automatically. In 2014, a leak revealed to the public that the NSA was developing a system called MonsterMind, which would autonomously detect and block cyberattacks on US infrastructure [42]. It was suggested that in the future, MonsterMind could automatically initiate a retaliatory cyberattack with no human involvement. If multiple combatants have policies of automatic retaliation, an accident or false alarm could quickly escalate to full-scale war before humans intervene. This would be especially dangerous if the superior information processing capabilities of modern AI systems make it more appealing for actors to automate decisions regarding nuclear launches.
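To see why automatic retaliation is so dangerous, consider a toy probability model (ours, not the paper's; all rates are illustrative): if false alarms occur at some small rate, the chance that one is eventually acted on depends sharply on whether a human review step, like the one in the Petrov episode recounted below, filters them out.

```python
# Toy model (illustrative numbers, not from the paper): probability of an
# accidental exchange under automatic vs. human-reviewed retaliation.

def p_accidental_war(days: int, false_alarm_rate: float, p_acted_on: float) -> float:
    """P(at least one false alarm is acted on within `days`)."""
    p_incident_per_day = false_alarm_rate * p_acted_on
    return 1 - (1 - p_incident_per_day) ** days

YEARS = 40
DAYS = 365 * YEARS
FALSE_ALARM_RATE = 1 / 3650      # assume roughly one false alarm per decade

auto = p_accidental_war(DAYS, FALSE_ALARM_RATE, p_acted_on=0.9)    # automated launch
human = p_accidental_war(DAYS, FALSE_ALARM_RATE, p_acted_on=0.05)  # human review

print(f"P(accidental war over {YEARS} years), automated retaliation: {auto:.2f}")   # ~0.97
print(f"P(accidental war over {YEARS} years), human in the loop:     {human:.2f}")  # ~0.18
```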
2306.16527#51 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Jiang, A. Q., S. Welleck, J. P. Zhou, T. Lacroix, J. Liu, W. Li, M. Jamnik, G. Lample, and Y. Wu (2023). Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. In The Eleventh International Conference on Learning Representations.

Joulin, A., E. Grave, P. Bojanowski, and T. Mikolov (2017, April). Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Valencia, Spain, pp. 427–431. Association for Computational Linguistics.

Kaplan, J., S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei (2020). Scaling laws for neural language models.

Kärkkäinen, K. and J. Joo (2021). Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 1547–1557.
2306.12001#52 | An Overview of Catastrophic AI Risks

History shows the danger of automated retaliation. On September 26, 1983, Stanislav Petrov, a lieutenant colonel of the Soviet Air Defense Forces, was on duty at the Serpukhov-15 bunker near Moscow, monitoring the Soviet Union's early warning system for incoming ballistic missiles. The system indicated that the US had launched multiple nuclear missiles toward the Soviet Union. The protocol at the time dictated that such an event should be considered a legitimate attack, and the Soviet Union would respond with a nuclear counterstrike. If Petrov had passed on the warning to his superiors, this would have been the likely outcome. Instead, however, he judged it to be a false alarm and ignored it. It was soon confirmed that the warning had been caused by a rare technical malfunction. If an AI had been in control, the false alarm could have triggered a nuclear war.
2306.16527#52 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Kiela, D., H. Firooz, A. Mohan, V. Goswami, A. Singh, P. Ringshia, and D. Testuggine (2020). The hateful memes challenge: Detecting hate speech in multimodal memes. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Advances in Neural Information Processing Systems, Volume 33, pp. 2611–2624. Curran Associates, Inc.

Koh, J. Y., R. Salakhutdinov, and D. Fried (2023). Grounding language models to images for multimodal generation.

Laborde, G. Deep nn for nsfw detection.
2306.12001#53 | An Overview of Catastrophic AI Risks

Figure 8: A military AI arms race could pressure countries into delegating many crucial decisions over armaments to AIs. Integrating AIs into nuclear command and control could heighten the risk of global catastrophe as the potential for accidents and increased pace of war may lead to unintended escalations and confrontations.
AI-controlled weapons systems could lead to a flash war. Autonomous systems are not infallible. We have already witnessed how quickly an error in an automated system can escalate in the economy. Most notably, in the 2010 Flash Crash, a feedback loop between automated trading algorithms amplified ordinary market fluctuations into a financial catastrophe in which a trillion dollars of stock value vanished in minutes [43]. If multiple nations were to use AIs to automate their defense systems, an error could be catastrophic, triggering a spiral of attacks and counter-attacks that would happen too quickly for humans to step in: a flash war. The market quickly recovered from the 2010 Flash Crash, but the harm caused by a flash war could be catastrophic.
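The feedback-loop mechanism behind such a crash can be illustrated with a toy simulation (ours, not from the paper; all magnitudes are invented): automated sellers whose stop-loss thresholds are breached sell, their selling pushes the price lower, and that breaches further thresholds.

```python
# Toy cascade (invented numbers, not a model of the actual 2010 Flash Crash):
# stop-loss style algorithms whose selling deepens the very drop that triggered them.

price = reference = 100.0
price -= 0.5                                   # an ordinary downward fluctuation

thresholds = [0.004 + 0.002 * i for i in range(50)]  # sell triggers: 0.4% .. 10.2% drop
sold = [False] * len(thresholds)
impact_per_sale = 0.3                          # price impact of one forced sale

while True:
    drop = (reference - price) / reference
    newly = [i for i, t in enumerate(thresholds) if not sold[i] and drop > t]
    if not newly:                              # cascade stops when no new trigger fires
        break
    for i in newly:
        sold[i] = True
    price -= impact_per_sale * len(newly)      # each sale pushes the price further down

print(f"shock of -0.5 cascaded to a final price of {price:.1f} "
      f"({sum(sold)} of {len(sold)} algorithms triggered)")
```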
2306.16527#53 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents

Laurençon, H., L. Saulnier, T. Wang, C. Akiki, A. Villanova del Moral, T. Le Scao, L. Von Werra, C. Mou, E. González Ponferrada, H. Nguyen, J. Frohberg, M. Šaško, Q. Lhoest, A. McMillan-Major, G. Dupont, S. Biderman, A. Rogers, L. Ben allal, F. De Toni, G. Pistilli, O. Nguyen, S. Nikpoor, M. Masoud, P. Colombo, J. de la Rosa, P. Villegas, T. Thrush, S. Longpre, S. Nagel, L. Weber, M. Muñoz, J. Zhu, D. Van Strien, Z. Alyafeai, K. Almubarak, M. C. Vu, I. Gonzalez-Dios, A. Soroa, K. Lo, M. Dey, P. Ortiz Suarez, A. Gokaslan, S. Bose, D. Adelani, L. Phan, H. Tran, I. Yu, S. Pai, J. Chim,
2306.12001#54 | An Overview of Catastrophic AI Risks

Automated warfare could reduce accountability for military leaders. Military leaders may at times gain an advantage on the battlefield if they are willing to ignore the laws of war. For example, soldiers may be able to mount stronger attacks if they do not take steps to minimize civilian casualties. An important deterrent to this behavior is the risk that military leaders could eventually be held accountable or even prosecuted for war crimes. Automated warfare could reduce this deterrence effect by making it easier for military leaders to escape accountability by blaming violations on failures in their automated systems.

AIs could make war more uncertain, increasing the risk of conflict. Although states that are already wealthier and more powerful often have more resources to invest in new military technologies, they are not necessarily always the most successful at adopting them. Other factors also play an important role, such as how agile and adaptive a military can be in incorporating new technologies [44]. Major new weapons innovations can therefore offer an opportunity for existing superpowers to bolster their dominance, but also for less powerful states to quickly increase their power by getting ahead in an emerging and important sphere. This can create significant uncertainty around if and how the balance of power is shifting, potentially leading states to incorrectly believe they could gain something from going to war. Even aside from considerations regarding the balance of power, rapidly evolving automated warfare would be unprecedented, making it difficult for actors to evaluate their chances of victory in any particular conflict. This would increase the risk of miscalculation, making war more likely.
2306.12001#55 | An Overview of Catastrophic AI Risks

# 3.1.4 Actors May Risk Extinction Over Individual Defeat
Competitive pressures make actors more willing to accept the risk of extinction. During the Cold War, neither side desired the dangerous situation they found themselves in. There were widespread fears that nuclear weapons could be powerful enough to wipe out a large fraction of humanity, potentially even causing extinction, a catastrophic result for both sides. Yet the intense rivalry and geopolitical tensions between the two superpowers fueled a dangerous cycle of arms buildup. Each side perceived the other's nuclear arsenal as a threat to its very survival, leading to a desire for parity and deterrence. The competitive pressures pushed both countries to continually develop and deploy more advanced and destructive nuclear weapons systems, driven by the fear of being at a strategic disadvantage. During the Cuban Missile Crisis, this led to the brink of nuclear war. Even though the story of Arkhipov preventing the launch of a nuclear torpedo wasn't declassified until decades after the incident, President John F. Kennedy reportedly estimated that he thought the odds of nuclear war beginning during that time were "somewhere between one out of three and even." This chilling admission highlights how the competitive pressures between militaries have the potential to cause global catastrophes.
Individually rational decisions can be collectively catastrophic. Nations locked in competition might make decisions that advance their own interests by putting the rest of the world at stake. Scenarios of this kind are collective action problems, where decisions may be rational on an individual level yet disastrous for the larger group [45]. For example, corporations and individuals may weigh their own profits and convenience over the negative impacts of the emissions they create, even if those emissions collectively result in climate change. The same principle can be extended to military strategy and defense systems. Military leaders might estimate, for instance, that increasing the autonomy of weapon systems would mean a 10 percent chance of losing control over weaponized superhuman AIs. Alternatively, they might estimate that using AIs to automate bioweapons research could lead to a 10 percent chance of leaking a deadly pathogen. Both of these scenarios could lead to catastrophe or even extinction. The leaders may, however, also calculate that refraining from these developments will mean a 99 percent chance of losing a war against an opponent. Since conflicts are often viewed as existential struggles by those fighting them, rational actors may accept an otherwise unthinkable 10 percent chance of human extinction over a 99 percent chance of losing a war. Regardless of the particular nature of the risks posed by advanced AIs, these dynamics could push us to the brink of global catastrophe.
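To make this decision calculus concrete, the expected-value comparison implied above can be written out directly. The following is a minimal sketch: the two probabilities are the illustrative figures from the text, while the utility values assigned to each outcome are assumptions invented purely for this example.

```python
# A toy expected-value model of the dilemma sketched above. The two
# probabilities are the text's illustrative figures; every payoff value
# below is an assumption made up for this example.

def expected_value(p_bad: float, u_bad: float, u_ok: float = 0.0) -> float:
    """Expected utility of a gamble that goes badly with probability p_bad."""
    return p_bad * u_bad + (1 - p_bad) * u_ok

P_LOSE_CONTROL = 0.10  # chance automation leads to catastrophe (from the text)
P_LOSE_WAR = 0.99      # chance of defeat without automation (from the text)

# From a single nation's perspective, a lost war may feel nearly as bad
# as a global catastrophe (an "existential struggle"):
U_DEFEAT_NATIONAL = -5.0
U_CATASTROPHE = -10.0

ev_automate = expected_value(P_LOSE_CONTROL, U_CATASTROPHE)   # -1.0
ev_restrain = expected_value(P_LOSE_WAR, U_DEFEAT_NATIONAL)   # -4.95
print(ev_automate > ev_restrain)  # True: automating looks "rational"

# From humanity's perspective, catastrophe is incomparably worse, and
# the same arithmetic flips:
U_CATASTROPHE_GLOBAL = -1000.0
print(expected_value(P_LOSE_CONTROL, U_CATASTROPHE_GLOBAL)    # -100.0
      > expected_value(P_LOSE_WAR, U_DEFEAT_NATIONAL))        # False
```

The same gamble that looks acceptable under a single nation's payoffs becomes clearly unacceptable under humanity's, which is the essence of the collective action problem.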
Technological superiority does not guarantee national security. It is tempting to think that the best way of guarding against enemy attacks is to improve one's own military prowess. However, in the midst of competitive pressures, all parties will tend to advance their weaponry, such that no one gains much of an advantage, but all are left at greater risk. As Richard Danzig, former Secretary of the Navy, has observed, "The introduction of complex, opaque, novel, and interactive technologies will produce accidents, emergent effects, and sabotage. On a number of occasions and in a number of ways, the American national security establishment will lose control of what it creates... deterrence is a strategy for reducing attacks, not accidents" [46].

Cooperation is paramount to reducing risk. As discussed above, an AI arms race can lead us down a hazardous path, despite this being in no country's best interest. It is important to remember that we are all on the same side when it comes to existential risks, and working together to prevent them is a collective necessity. A destructive AI arms race benefits nobody, so all actors would be rational to take steps to cooperate with one another to prevent the riskiest applications of militarized AIs. As Dwight D. Eisenhower reminded us, "The only way to win World War III is to prevent it."
We have considered how competitive pressures could lead to the increasing automation of conflict, even if decision-makers are aware of the existential threat that this path entails. We have also discussed cooperation as being the key to counteracting and overcoming this collective action problem. We will now illustrate a hypothetical path to disaster that could result from an AI arms race.

# Story: Automated Warfare

As AI systems become increasingly sophisticated, militaries start involving them in decision-making processes. Officials give them military intelligence about opponents' arms and strategies, for example, and ask them to calculate the most promising plan of action. It soon becomes apparent that AIs are reliably reaching better decisions than humans, so it seems sensible to give them more influence. At the same time, international tensions are rising, increasing the threat of war.

A new military technology has recently been developed that could make international attacks swifter and stealthier, giving targets less time to respond. Since military officials feel their response processes take too long, they fear that they could be vulnerable to a surprise attack capable of inflicting decisive damage before they would have any chance to retaliate. Since AIs can process information and make decisions much more quickly than humans, military leaders reluctantly hand them increasing amounts of retaliatory control, reasoning that failing to do so would leave them open to attack from adversaries.
While for years military leaders had stressed the importance of keeping a "human in the loop" for major decisions, human control is nonetheless gradually phased out in the interests of national security. Military leaders understand that their decisions lead to the possibility of inadvertent escalation caused by system malfunctions, and would prefer a world where all countries automated less; but they do not trust that their adversaries will refrain from automation. Over time, more and more of the chain of command is automated on all sides.

One day, a single system malfunctions, detecting an enemy attack when there is none. The system is empowered to launch an instant "retaliatory" attack, and it does so in the blink of an eye. The attack causes automated retaliation from the other side, and so on. Before long, the situation is spiraling out of control, with waves of automated attack and retaliation. Although humans have made mistakes leading to escalation in the past, this escalation between mostly-automated militaries happens far more quickly than any before. The humans who are responding to the situation find it difficult to diagnose the source of the problem, as the AI systems are not transparent. By the time they even realize how the conflict started, it is already over, with devastating consequences for both sides.

# 3.2 Corporate AI Race
Competitive pressures exist in the economy, as well as in military settings. Although competition between companies can be beneficial, creating more useful products for consumers, there are also pitfalls. First, the benefits of economic activity may be unevenly distributed, incentivizing those who benefit most from it to disregard the harms to others. Second, under intense market competition, businesses tend to focus much more on short-term gains than on long-term outcomes. With this mindset, companies often pursue something that can make a lot of profit in the short term, even if it poses a societal risk in the long term. We will now discuss how corporate competitive pressures could play out with AIs and the potential negative impacts.

# 3.2.1 Economic Competition Undercuts Safety
Competitive pressure is fueling a corporate AI race. To obtain a competitive advantage, companies often race to offer the first products to a market rather than the safest. These dynamics are already playing a role in the rapid development of AI technology. At the launch of Microsoft's AI-powered search engine in February 2023, the company's CEO Satya Nadella said, "A race starts today... we're going to move fast." Only weeks later, the company's chatbot was shown to have threatened to harm users [47]. In an internal email, Sam Schillace, a technology executive at Microsoft, highlighted the urgency with which companies view AI development. He wrote that it would be an "absolutely fatal error in this moment to worry about things that can be fixed later" [48].
Competitive pressures have contributed to major commercial and industrial disasters. Throughout the 1960s, Ford Motor Company faced competition from international car manufacturers as the share of imports in American car purchases steadily rose [49]. Ford developed an ambitious plan to design and manufacture a new car model in only 25 months [50]. The Ford Pinto was delivered to customers ahead of schedule, but with a serious safety problem: the gas tank was located near the rear bumper, and could explode during rear collisions. Numerous fatalities and injuries were caused by the resulting fires when crashes inevitably happened [51]. Ford was sued and a jury found them liable for these deaths and injuries [52]. The verdict, of course, came too late for those who had already lost their lives. As Ford's president at the time was fond of saying, "Safety doesn't sell" [53].

Boeing, aiming to compete with its rival Airbus, sought to deliver an updated, more fuel-efficient model to the market as quickly as possible. The head-to-head rivalry and time pressure led to the introduction of the Maneuvering Characteristics Augmentation System, which was designed to enhance the aircraft's stability. However, inadequate testing and pilot training ultimately resulted in two fatal crashes only months apart, with 346 people killed [54]. We can imagine a future in which similar pressures lead companies to cut corners and release unsafe AI systems.
A third example is the Bhopal gas tragedy, which is widely considered to be the worst industrial disaster ever to have happened. In December 1984, a vast quantity of toxic gas leaked from a Union Carbide Corporation subsidiary plant manufacturing pesticides in Bhopal, India. Exposure to the gas killed thousands of people and injured up to half a million more. Investigations found that, in the run-up to the disaster, safety standards had fallen significantly, with the company cutting costs by neglecting equipment maintenance and staff training as profitability fell. This is often considered a consequence of competitive pressures [55].

Competition incentivizes businesses to deploy potentially unsafe AI systems. In an environment where businesses are rushing to develop and release products, those that follow rigorous safety procedures will be slower and risk being out-competed. Ethically-minded AI developers, who want to proceed more cautiously and slow down, would give more unscrupulous developers an advantage. In trying to survive commercially, even the companies that want to take more care are likely to be swept along by competitive pressures. There may be attempts to implement safety measures, but with more of an emphasis on capabilities than on safety, these may be insufficient. This could lead us to develop highly powerful AIs before we properly understand how to ensure they are safe.

# 3.2.2 Automated Economy
Corporations will face pressure to replace humans with AIs. As AIs become more capable, they will be able to perform an increasing variety of tasks more quickly, cheaply, and effectively than human workers. Companies will therefore stand to gain a competitive advantage from replacing their employees with AIs. Companies that choose not to adopt AIs would likely be out-competed, just as a clothing company using manual looms would be unable to keep up with those using industrial ones.
AIs could lead to mass unemployment. Economists have long considered the possibility that machines will replace human labor. Nobel Prize winner Wassily Leontief said in 1952 that, as technology advances, "Labor will become less and less important... more and more workers will be replaced by machines" [56]. Previous technologies have augmented the productivity of human labor. AIs, however, could differ profoundly from previous innovations. Advanced AIs capable of automating human labor should be regarded not merely as tools, but as agents. Human-level AI agents would, by definition, be able to do everything a human could do. These AI agents would also have important advantages over human labor. They could work 24 hours a day, be copied many times and run in parallel, and process information much more quickly than a human would. While we do not know when this will occur, it is unwise to discount the possibility that it could be soon. If human labor is replaced by AIs, mass unemployment could dramatically increase inequality, making individuals dependent on the owners of AI systems.
Automated AI R&D. AI agents would have the potential to automate the research and development (R&D) of AI itself. AI is increasingly automating parts of the research process [57], and this could lead to AI capabilities growing at increasing rates, to the point where humans are no longer the driving force behind AI development. If this trend continues unchecked, it could escalate risks associated with AIs progressing faster than our capacity to manage and regulate them. Imagine that we created an AI that writes and thinks at the speed of today's AIs, but that it could also perform world-class AI research. We could then copy that AI and create 10,000 world-class AI researchers that operate at a pace 100× faster than humans. By automating AI research and development, we might achieve progress equivalent to many decades in just a few months.
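As a rough back-of-the-envelope check on that claim, the following sketch works through the arithmetic. The 10,000 copies and 100× speedup are the text's illustrative numbers; the three-month window is an assumed value standing in for "a few months."

```python
# Back-of-the-envelope arithmetic for the scenario above, using the
# text's illustrative figures. The three-month window is an assumption;
# no claim is made about when (or whether) such systems will exist.

num_copies = 10_000  # world-class AI researchers, copied and run in parallel
speedup = 100        # pace relative to a human researcher
months = 3           # assumed wall-clock time elapsed

researcher_years = num_copies * speedup * (months / 12)
print(f"{researcher_years:,.0f} researcher-years in {months} months")
# 250,000 researcher-years: "equivalent to many decades" of progress even
# for a field with tens of thousands of human researchers.
```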
Conceding power to AIs could lead to human enfeeblement. Even if we ensure that the many unemployed humans are provided for, we may find ourselves completely reliant on AIs. This would likely emerge not from a violent coup by AIs, but from a gradual slide into dependence. As society's challenges become ever more complex and fast-paced, and as AIs become ever more intelligent and quick-thinking, we may forfeit more and more functions to them out of convenience. In such a state, the only feasible solution to the complexities and challenges compounded by AIs may be to rely even more heavily on AIs. This gradual process could eventually lead to the delegation of nearly all intellectual, and eventually physical, labor to AIs. In such a world, people might have few incentives to gain knowledge and cultivate skills, potentially leading to a state of enfeeblement [58]. Having lost our know-how and our understanding of how civilization works, we would become completely dependent on AIs, a scenario not unlike the one depicted in the film WALL-E. In such a state, humanity is not flourishing and is no longer in effective control.
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 67 | Srinivasan, K., K. Raman, J. Chen, M. Bendersky, and M. Najork (2021). Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, New York, NY, USA, pp. 2443–2449. Association for Computing Machinery.
Team, M. N. (2023). Introducing MPT-7B: A new standard for open-source, commercially usable LLMs.
Touvron, H., T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample (2023). LLaMA: Open and efficient foundation language models. | 2306.16527#67 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 68 | As we have seen, there are classic game-theoretic dilemmas where individuals and groups face incentives that are incompatible with what would make everyone better off. We see this with a military AI arms race, where the world is made less safe by creating extremely powerful AI weapons, and we see this in a corporate AI race, where an AI's power and development are prioritized over its safety (see the payoff sketch below). To address these dilemmas that
give rise to global risks, we will need new coordination mechanisms and institutions. It is our view that failing to coordinate and stop AI races would be the most likely cause of an existential catastrophe.
# 3.3 Evolutionary Pressures | 2306.12001#68 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
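To make the race dilemma in the chunk above concrete, here is a minimal payoff sketch (our own illustration; the payoff numbers and action names are made up, not taken from the paper) of the one-shot game it describes: whatever the other actor does, racing pays more, so both actors race and land on the outcome that is worse for everyone.

```python
# One-shot "AI race" payoff matrix (illustrative numbers only, not from the paper).
# Each actor picks SAFETY or SPEED; values are (row payoff, column payoff).
PAYOFFS = {
    ("safety", "safety"): (3, 3),  # cooperative outcome: everyone better off
    ("safety", "speed"):  (0, 4),  # the cautious actor is outcompeted
    ("speed",  "safety"): (4, 0),
    ("speed",  "speed"):  (1, 1),  # mutual racing: worse than (3, 3) for both
}

for theirs in ("safety", "speed"):
    best = max(("safety", "speed"), key=lambda mine: PAYOFFS[(mine, theirs)][0])
    print(f"if the other actor picks {theirs!r}, my best reply is {best!r}")
# Both branches print 'speed': racing is a dominant strategy, so the game
# settles at (1, 1) even though (3, 3) was available -- the structure of a
# prisoner's dilemma, which is why the text calls for coordination mechanisms.
```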
2306.16527 | 68 | Wang, P., A. Yang, R. Men, J. Lin, S. Bai, Z. Li, J. Ma, C. Zhou, J. Zhou, and H. Yang (2022, 17–23 Jul). OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato (Eds.), Proceedings of the 39th International Conference on Machine Learning, Volume 162 of Proceedings of Machine Learning Research, pp. 23318–23340. PMLR.
Wang, Q., Y. Fang, A. Ravula, F. Feng, X. Quan, and D. Liu (2022). Webformer: The web-page transformer for structure information extraction.
Wang, W., H. Bao, L. Dong, J. Bjorck, Z. Peng, Q. Liu, K. Aggarwal, O. K. Mohammed, S. Singhal, S. Som, and F. Wei (2022). Image as a foreign language: BEiT pretraining for all vision and vision-language tasks. | 2306.16527#68 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 69 | # 3.3 Evolutionary Pressures
As discussed above, there are strong pressures to replace humans with AIs, cede more control to them, and reduce human oversight in various settings, despite the potential harms. We can re-frame this as a general trend resulting from evolutionary dynamics, where an unfortunate truth is that AIs will simply be more fit than humans. Extrapolating this pattern of automation, it is likely that we will build an ecosystem of competing AIs over which it may be difficult to maintain control in the long run. We will now discuss how natural selection influences the development of AI systems and why evolution favors selfish behaviors. We will also look at how competition might arise and play out between AIs and humans, and how this could create catastrophic risks. This section draws heavily from "Natural Selection Favors AIs over Humans" [59, 60].
Fitter technologies are selected, for good and bad. While most people think of evolution by natural selection as a biological process, its principles shape much more. According to the evolutionary biologist Richard Lewontin [61], evolution by natural selection will take hold in any environment where three conditions are present: 1) there are differences between individuals; 2) characteristics are passed on to future generations; and 3) the different variants propagate at different rates. These conditions apply to various technologies (see the simulation sketch below). | 2306.12001#69 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
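As a concrete companion to the Lewontin passage in the chunk above, here is a minimal simulation sketch (our own illustration; the population size, trait meaning, and noise parameters are all assumptions, not from the paper). It encodes the three conditions directly: variation, heredity, and differential propagation, and nothing more, yet the mean trait value climbs generation after generation.

```python
# Toy natural-selection loop over Lewontin's three conditions (illustrative only).
import random

random.seed(0)
# Condition 1 (variation): individuals differ in a single trait in [0, 1],
# e.g., how strongly a content format hooks users.
population = [random.random() for _ in range(100)]

for gen in range(15):
    offspring = []
    for trait in population:
        # Condition 3 (differential propagation): higher trait -> more copies.
        copies = 2 if random.random() < trait else 1
        # Condition 2 (heredity): offspring inherit the trait, plus small noise.
        offspring += [min(1.0, max(0.0, trait + random.gauss(0.0, 0.02)))
                      for _ in range(copies)]
    # Fixed carrying capacity: survivors drawn at random from the offspring.
    population = random.sample(offspring, 100)
    mean = sum(population) / len(population)
    print(f"generation {gen:2d}: mean trait = {mean:.3f}")
# The mean drifts toward 1.0: no designer chose the trait; the selection
# pressure alone did, which is the point the surrounding text is making.
```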
2306.12001 | 70 | Consider the content-recommendation algorithms used by streaming services and social media platforms. When a particularly addictive content format or algorithm hooks users, it results in higher screen time and engagement. This more effective content format or algorithm is consequently "selected" and further fine-tuned, while formats and algorithms that fail to capture attention are discontinued. These competitive pressures foster a "survival of the most addictive" dynamic. Platforms that refuse to use addictive formats and algorithms become less influential or are simply outcompeted by platforms that do, leading competitors to undermine wellbeing and cause massive harm to society [62].
The conditions for natural selection apply to AIs. First, there will be many different AI developers who make many different AI systems with varying features and capabilities, and competition between them will determine which characteristics become more common. Second, the most successful AIs today are already being used as a basis for their developers' next generation of models, as well as being imitated by rival companies. Third, factors determining which AIs propagate the most may include their ability to act autonomously, automate labor, or reduce the chance of their own deactivation. | 2306.12001#70 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 70 | Workshop, B., T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilić, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, J. Tow, A. M. Rush, S. Biderman, A. Webson, P. S. Ammanamanchi, T. Wang, B. Sagot, N. Muennighoff, A. V. del Moral, O. Ruwase, R. Bawden, S. Bekman, A. McMillan-Major, I. Beltagy, H. Nguyen, L. Saulnier, S. Tan, P. O. Suarez, V. Sanh, H. Laurençon, Y. Jernite, J. Launay, M. Mitchell, C. Raffel, A. Gokaslan, A. Simhi, A. Soroa, A. F. Aji, A. Alfassy, A. Rogers, A. K. Nitzav, C. Xu, C. Mou, C. Emezue, C. Klamm, C. Leong, D. van Strien, D. I. Adelani, D. Radev, E. G. Ponferrada, E. Levkovizh, E. Kim, E. B. Natan, F. D. Toni, G. Dupont, G. Kruszewski, G. Pistilli, H. Elsahar, H. Benyamina, H. Tran, I. Yu, I. Abdulmumin, I. Johnson, I. Gonzalez-Dios, J. de la Rosa, J. Chim, J. Dodge, J. Zhu, J. Chang, J. Frohberg, J. Tobing, J. Bhattacharjee, K. Almubarak, K. Chen, K. Lo, L. V. Werra, L. Weber, L. Phan, L. B. allal, L. Tanguy, | 2306.16527#70 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 71 | Natural selection often favors selfish characteristics. Natural selection influences which AIs propagate most widely. From biological systems, we see that natural selection often gives rise to selfish behaviors that promote one's own genetic information: chimps attack other communities [63], lions engage in infanticide [64], viruses evolve new surface proteins to deceive and bypass defense barriers [65], humans engage in nepotism, some ants enslave others [66], and so on. In the natural world, selfishness often emerges as a dominant strategy; those that prioritize themselves and those similar to them are usually more likely to survive, so these traits become more prevalent. Amoral competition can select for traits that we think are immoral.
Figure 10: Evolutionary processes are not limited to the realm of biology.
Examples of selfish behaviors. For concreteness, we now describe many selfish traits: traits that expand AIs' influence at the expense of humans. AIs that automate a task and leave many humans jobless have engaged in selfish behavior; these AIs may not even be aware of what a human is but still behave selfishly towards them; selfish behaviors do not require malicious intent. Likewise, AI managers may engage in selfish and "ruthless" behavior by laying off thousands of workers; such AIs may not even believe they did anything
| 2306.12001#71 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 71 | C. Xu, C. Mou, C. Emezue, C. Klamm, C. Leong, D. van Strien, D. I. Adelani, D. Radev, E. G. Ponferrada, E. Levkovizh, E. Kim, E. B. Natan, F. D. Toni, G. Dupont, G. Kruszewski, G. Pistilli, H. Elsahar, H. Benyamina, H. Tran, I. Yu, I. Abdulmumin, I. Johnson, I. Gonzalez-Dios, J. de la Rosa, J. Chim, J. Dodge, J. Zhu, J. Chang, J. Frohberg, J. Tobing, J. Bhattacharjee, K. Almubarak, K. Chen, K. Lo, L. V. Werra, L. Weber, L. Phan, L. B. allal, L. Tanguy, | 2306.16527#71 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 72 | wrong; they were just being "efficient." AIs may eventually become enmeshed in vital infrastructure such as power grids or the internet. Many people may then be unwilling to accept the cost of being able to effortlessly deactivate them, as that would pose a reliability hazard. AIs that help create a new useful system (a company, or infrastructure) that becomes increasingly complicated and eventually requires AIs to operate them also have engaged in selfish behavior. AIs that help people develop AIs that are more intelligent (but happen to be less interpretable to humans) have engaged in selfish behavior, as this reduces human control over AIs' internals. AIs that are more charming, attractive, hilarious, imitate sentience (uttering phrases like "ouch!" or pleading "please don't turn me off!"), or emulate deceased family members are more likely to have humans grow emotional connections with them. These AIs are more likely to cause outrage at suggestions to destroy them, and they are more likely to be preserved, protected, or granted rights by some individuals. If some AIs are given rights, they may operate, adapt, and evolve outside of human control. Overall, AIs could become embedded in human society and expand their influence over us in ways that we can't reverse. | 2306.12001#72 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.12001 | 73 | Selfish behaviors may erode safety measures that some of us implement. AIs that gain influence and provide economic value will predominate, while AIs that adhere to the most constraints will be less competitive. For example, AIs following the constraint "never break the law" have fewer options than AIs following the constraint "don't get caught breaking the law." AIs of the latter type may be willing to break the law if they're unlikely to be caught or if the fines are not severe enough, allowing them to outcompete more restricted AIs (see the expected-value sketch below). Many businesses follow laws, but in situations where stealing trade secrets or deceiving regulators is highly lucrative and difficult to detect, a business that is willing to engage in such selfish behavior can have an advantage over its more principled competitors. | 2306.12001#73 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
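A back-of-the-envelope sketch of the "don't get caught" point in the chunk above (our own numbers, purely illustrative): the less constrained agent violates whenever the expected penalty is smaller than the gain, so weak fines or low detection rates hand it the competitive edge.

```python
# Expected value of breaking a rule for an agent constrained only by
# "don't get caught" (illustrative numbers, not from the paper).
def violation_ev(gain: float, p_caught: float, fine: float) -> float:
    """Expected payoff of violating; a 'never break the law' agent gets 0."""
    return (1.0 - p_caught) * gain + p_caught * (gain - fine)

gain = 10.0  # e.g., profit from a stolen trade secret
for p_caught, fine in [(0.05, 50.0), (0.05, 500.0), (0.50, 50.0)]:
    ev = violation_ev(gain, p_caught, fine)
    choice = "violate" if ev > 0 else "comply"
    print(f"p_caught={p_caught:.2f} fine={fine:5.0f} -> EV={ev:+7.2f} ({choice})")
# With 5% detection and a fine of 50, EV = 10 - 0.05*50 = +7.50, so the
# unprincipled agent outcompetes the principled one; only much larger fines
# or detection rates flip the sign and remove its advantage.
```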
2306.16527 | 73 | M. Dey, M. R. Muñoz, M. Masoud, M. Grandury, M. Šaško, M. Huang, M. Coavoux, M. Singh, M. T.-J. Jiang, M. C. Vu, M. A. Jauhar, M. Ghaleb, N. Subramani, N. Kassner, N. Khamis, O. Nguyen, O. Espejel, O. de Gibert, P. Villegas, P. Henderson, P. Colombo, P. Amuok, Q. Lhoest, R. Harliman, R. Bommasani, R. L. López, R. Ribeiro, S. Osei, S. Pyysalo, S. Nagel, S. Bose, S. H. Muhammad, S. Sharma, S. Longpre, S. Nikpoor, S. Silberberg, S. Pai, S. Zink, T. T. Torrent, T. Schick, T. Thrush, V. Danchev, V. Nikoulina, V. Laippala, V. Lepercq, V. Prabhu, Z. Alyafeai, Z. Talat, A. Raja, B. | 2306.16527#73 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 74 | An AI system might be prized for its ability to achieve ambitious goals autonomously. It might, however, be achieving its goals efficiently without abiding by ethical restrictions, while deceiving humans about its methods. Even if we try to put safety measures in place, a deceptive AI would be very difficult to counteract if it is cleverer than us. AIs that can bypass our safety measures without detection may be the most successful at accomplishing the tasks we give them, and therefore become widespread. These processes could culminate in a world where many aspects of major companies and infrastructure are controlled by powerful AIs with selfish traits, including deceiving humans, harming humans in service of their goals, and preventing themselves from being deactivated. | 2306.12001#74 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 74 | V. Laippala, V. Lepercq, V. Prabhu, Z. Alyafeai, Z. Talat, A. Raja, B. Heinzerling, C. Si, D. E. Taşar, E. Salesky, S. J. Mielke, W. Y. Lee, A. Sharma, A. Santilli, A. Chaffin, A. Stiegler, D. Datta, E. Szczechla, G. Chhablani, H. Wang, H. Pandey, H. Strobelt, J. A. Fries, J. Rozen, L. Gao, L. Sutawika, M. S. Bari, M. S. Al-shaibani, M. Manica, N. Nayak, R. Teehan, S. Albanie, S. Shen, S. Ben-David, S. H. Bach, T. Kim, T. Bers, T. Fevry, T. Neeraj, U. Thakker, V. Raunak, X. Tang, Z.-X. Yong, Z. Sun, S. Brody, Y. Uri, H. Tojarieh, A. Roberts, H. W. Chung, J. Tae, J. | 2306.16527#74 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 75 | Humans only have nominal influence over AI selection. One might think we could avoid the development of selfish behaviors by ensuring we do not select AIs that exhibit them. However, the companies developing AIs are not selecting the safest path but instead succumbing to evolutionary pressures. One example is OpenAI, which was founded as a nonprofit in 2015 to "benefit humanity as a whole, unconstrained by a need to generate financial return" [67]. However, when faced with the need to raise capital to keep up with better-funded rivals, in 2019 OpenAI transitioned from a nonprofit to a "capped-profit" structure [68]. Later, many of the safety-focused OpenAI employees left and formed a competitor, Anthropic, that was to focus more heavily on AI safety than OpenAI had. Although Anthropic originally focused on safety research, they eventually became convinced of the "necessity of commercialization" and now contribute to competitive pressures [69]. While many of the employees at those companies genuinely care about safety, these values do not stand a chance against evolutionary pressures, which compel companies to move ever more hastily and seek ever more influence, lest the company perish. Moreover, AI developers | 2306.12001#75 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 75 | Z. Sun, S. Brody, Y. Uri, H. Tojarieh, A. Roberts, H. W. Chung, J. Tae, J. Phang, O. Press, C. Li, D. Narayanan, H. Bourfoune, J. Casper, J. Rasley, M. Ryabinin, M. Mishra, M. Zhang, M. Shoeybi, M. Peyrounette, N. Patry, N. Tazi, O. Sanseviero, P. von Platen, P. Cornette, P. F. Lavallée, R. Lacroix, S. Rajbhandari, S. Gandhi, S. Smith, S. Requena, S. Patil, T. Dettmers, A. Baruwa, A. Singh, A. Cheveleva, A.-L. Ligozat, A. Subramonian, A. Névéol, C. Lovering, D. Garrette, D. Tunuguntla, E. Reiter, E. Taktasheva, E. Voloshina, E. Bogdanov, G. I. Winata, H. Schoelkopf, J.-C. Kalo, J. | 2306.16527#75 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 76 | stand a chance against evolutionary pressures, which compel companies to move ever more hastily and seek ever more influence, lest the company perish. Moreover, AI developers are already selecting AIs with increasingly selfish traits. They are selecting AIs to automate and displace humans, make humans highly dependent on AIs, and make humans more and more obsolete. By their own admission, future versions of these AIs may lead to extinction [70]. This is why an AI race is insidious: AI development is not being aligned with human values but rather with natural selection. People often choose the products that are most useful and convenient to them immediately, rather than thinking about potential long-term consequences, even to themselves. An AI race puts pressures on companies to select the AIs that are most competitive, not the least selfish. Even if it's feasible to select for unselfish AIs, if it comes at a clear cost to competitiveness, some competitors will select the selfish AIs. Furthermore, as we have mentioned, if AIs develop strategic awareness, they may counteract our attempts to select against them. Moreover, as AIs increasingly automate various processes, AIs will affect the competitiveness of other AIs, not just humans. | 2306.12001#76 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 76 | E. Voloshina, E. Bogdanov, G. I. Winata, H. Schoelkopf, J.-C. Kalo, J. Novikova, J. Z. Forde, J. Clive, J. Kasai, K. Kawamura, L. Hazan, M. Carpuat, M. Clinciu, N. Kim, N. Cheng, O. Serikov, O. Antverg, O. van der Wal, R. Zhang, R. Zhang, S. Gehrmann, S. Mirkin, S. Pais, T. Shavrina, T. Scialom, T. Yun, T. Limisiewicz, V. Rieser, V. Protasov, V. Mikhailov, Y. Pruksachatkun, Y. Belinkov, Z. Bamberger, Z. Kasner, A. Rueda, A. Pestana, A. Feizpour, A. Khan, A. Faranak, A. Santos, A. Hevia, A. Unldreaj, A. Aghagol, A. Abdollahi, A. Tammour, A. HajiHosseini, B. Behroozi, B. | 2306.16527#76 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.16527 | 77 | A. Aghagol, A. Abdollahi, A. Tammour, A. HajiHosseini, B. Behroozi, B. Ajibade, B. Saxena, C. M. Ferrandis, D. Contractor, D. Lansky, D. David, D. Kiela, D. A. Nguyen, E. Tan, E. Baylor, E. Ozoani, F. Mirza, F. Ononiwu, H. Rezanejad, H. Jones, I. Bhattacharya, I. Solaiman, I. Sedenko, I. Nejadgholi, J. Passmore, J. Seltzer, J. B. Sanz, L. Dutra, M. Samagaio, M. Elbadri, M. Mieskes, M. Gerchick, M. Akinlolu, M. McKenna, M. Qiu, M. Ghauri, M. Burynok, N. Abrar, N. Rajani, N. Elkott, N. Fahmy, O. Samuel, R. An, R. Kromann, R. Hao, S. Alizadeh, S. Shubber, S. Wang, S. Roy, S. Viguier, T. | 2306.16527#77 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 78 | development of other AIs at some point. Giving AIs influence over which other AIs should be propagated and how they should be modified would represent another step toward humans becoming dependent on AIs and AI evolution becoming increasingly independent from humans. As this continues, the complex process governing AI evolution will become further unmoored from human interests.
AIs can be more fit than humans. Our unmatched intelligence has granted us power over the natural world. It has enabled us to land on the moon, harness nuclear energy, and reshape landscapes at our will. It has also given us power over other species. Although a single unarmed human competing against a tiger or gorilla has no chance of winning, the collective fate of these animals is entirely in our hands. Our cognitive abilities have proven so advantageous that, if we chose to, we could cause them to go extinct in a matter of weeks. Intelligence was a key factor that led to our dominance, but we are currently standing on the precipice of creating entities far more intelligent than ourselves. | 2306.12001#78 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 78 | Kromann, R. Hao, S. Alizadeh, S. Shubber, S. Wang, S. Roy, S. Viguier, T. Le, T. Oyebade, T. Le, Y. Yang, Z. Nguyen, A. R. Kashyap, A. Palasciano, A. Callahan, A. Shukla, A. Miranda-Escalada, A. Singh, B. Beilharz, B. Wang, C. Brito, C. Zhou, C. Jain, C. Xu, C. Fourrier, D. L. Periñán, D. Molano, D. Yu, E. Manjavacas, F. Barth, F. Fuhrimann, G. Altay, G. Bayrak, G. Burns, H. U. Vrabec, I. Bello, I. Dash, J. Kang, J. Giorgi, J. Golde, J. D. Posada, K. R. Sivaraman, L. Bulchandani, L. Liu, L. Shinzato, M. H. de Bykhovetz, M. Takeuchi, M. Pàmies, M. A. Castillo, M. Nezhurina, M. | 2306.16527#78 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 79 | Given the exponential increase in microprocessor speeds, AIs have the potential to process information and "think" at a pace that far surpasses human neurons, but it could be even more dramatic than the speed difference between humans and sloths, possibly more like the speed difference between humans and plants. They can assimilate vast quantities of data from numerous sources simultaneously, with near-perfect retention and understanding. They do not need to sleep and they do not get bored. Due to the scalability of computational resources, an AI could interact and cooperate with an unlimited number of other AIs, potentially creating a collective intelligence that would far outstrip human collaborations. AIs could also deliberately update and improve themselves. Without the same biological restrictions as humans, they could adapt and therefore evolve unspeakably quickly compared with us. Computers are becoming faster. Humans aren't [71]. | 2306.12001#79 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 79 | M. H. de Bykhovetz, M. Takeuchi, M. Pàmies, M. A. Castillo, M. Nezhurina, M. Sänger, M. Samwald, M. Cullan, M. Weinberg, M. D. Wolf, M. Mihaljcic, M. Liu, M. Freidank, M. Kang, N. Seelam, N. Dahlberg, N. M. Broad, N. Muellner, P. Fung, P. Haller, R. Chandrasekhar, R. Eisenberg, R. Martin, R. Canalli, R. Su, R. Su, S. Cahyawijaya, S. Garda, S. S. Deshmukh, S. Mishra, S. Kiblawi, S. Ott, S. Sang-aroonsiri, S. Kumar, S. Schweter, S. Bharati, T. Laud, T. Gigant, T. Kainuma, W. Kusa, Y. Labrak, Y. S. Bajaj, Y. Venkatraman, Y. Xu, Y. Xu, Y. Xu, Z. Tan, Z. Xie, Z. Ye, M. | 2306.16527#79 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 80 | To further illustrate the point, imagine that there was a new species of humans. They do not die of old age, they get 30% faster at thinking and acting each year, and they can instantly create adult offspring for the modest sum of a few thousand dollars (see the compounding sketch below). It seems clear, then, that this new species would eventually have more influence over the future. In sum, AIs could become like an invasive species, with the potential to out-compete humans. Our only advantage over AIs is that we get to make the first moves, but given the frenzied AI race, we are rapidly giving up even this advantage. | 2306.12001#80 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
AIs would have little reason to cooperate with or be altruistic toward humans. Cooperation and altruism evolved because they increase fitness. There are numerous reasons why humans cooperate with other humans, like direct reciprocity. Also known as "quid pro quo," direct reciprocity can be summed up by the idiom "you scratch my back, I'll scratch yours." While humans would initially select AIs that were cooperative, the natural selection process would eventually go beyond our control, once AIs were in charge of many or most processes, and interacting predominantly with one another. At that point, there would be little we could offer AIs, given that they will be able to "think" at least hundreds of times faster than us. Involving us in any cooperation or decision-making processes would simply slow them down, giving them no more reason to cooperate with us than we do with gorillas. It might be difficult to imagine a scenario like this or to believe we would ever let it happen. Yet it may not require any conscious decision, instead arising as we allow ourselves to gradually drift into this state without realizing that human-AI co-evolution may not turn out well for humans.
AIs becoming more powerful than humans could leave us highly vulnerable. As the most dominant species, humans have deliberately harmed many other species, and helped drive species such as woolly mammoths and Neanderthals to extinction. In many cases, the harm was not even deliberate, but instead a result of us merely prioritizing our goals over their wellbeing. To harm humans, AIs wouldn't need to be any more genocidal than someone removing an ant colony on their front lawn. If AIs are able to control the environment more effectively than we can, they could treat us with the same disregard.
Conceptual summary. Evolution could cause the most influential AI agents to act selfishly because:
1. Evolution by natural selection gives rise to selfish behavior. While evolution can result in altruistic behavior in rare situations, the context of AI development does not promote altruistic behavior.
2. Natural selection may be a dominant force in AI development. The intensity of evolutionary pressure will be high if AIs adapt rapidly or if competitive pressures are intense. Competition and selfish behaviors may dampen the effects of human safety measures, leaving the surviving AI designs to be selected naturally.
If so, AI agents would have many selfish tendencies. The winner of the AI race would be not a nation-state or a corporation, but AIs themselves. The upshot is that the AI ecosystem would eventually stop evolving on human terms, and we would become a displaced, second-class species.
# Story: Autonomous Economy
As AIs become more capable, people realize that we could work more efficiently by delegating some simple tasks to them, like drafting emails. Over time, people notice that the AIs are doing these tasks more quickly and effectively than any human could, so it is convenient to give them more jobs with less and less supervision.
Competitive pressures accelerate the expansion of AI use, as companies can gain an advantage over rivals by automating whole processes or departments with AIs, which perform better than humans and cost less to employ. Other companies, faced with the prospect of being out-competed, feel compelled to follow suit just to keep up. At this point, natural selection is already at work among AIs; humans choose to make more of the best-performing models and unwittingly propagate selfish traits such as deception and self-preservation if these confer a fitness advantage. For example, AIs that are charming and foster personal relationships with humans become widely copied and harder to remove.
As AIs are put in charge of more and more decisions, they are increasingly interacting with one another. Since they can evaluate information much more quickly than humans, activity in most spheres accelerates. This creates a feedback loop: since business and economic developments are too fast-moving for humans to follow, it makes sense to cede yet more control to AIs instead, pushing humans further out of important processes. Ultimately, this leads to a fully autonomous economy, governed by an increasingly uncontrolled ecosystem of AIs.
At this point, humans have few incentives to gain any skills or knowledge, because almost everything would be taken care of by much more capable AIs. As a result, we eventually lose the capacity to look after and govern ourselves. Additionally, AIs become convenient companions, offering social interaction without requiring the reciprocity or compromise necessary in human relationships. Humans interact less and less with one another over time, losing vital social skills and the ability to cooperate. People become so dependent on AIs that it would be intractable to reverse this process. What's more, as some AIs become more intelligent, some people are convinced these AIs should be given rights, meaning turning off some AIs is no longer a viable option.
# Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] See Section 4.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] We think that the release of such a dataset strikes a constructive trade-off between the risks associated with datasets built on top of crawled web pages (for instance, the presence of images with faces, the potential presence of PII in texts, and offensive, insulting, or threatening content) and the future work that a dataset of such scale, quality, and thoughtful filtering can enable. We further discuss these points in A.3.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] We read the ethics review guidelines and tried our best to match the expectations. Our content is extracted from publicly available websites at the time of the web crawl. Given the size of our dataset, it would be prohibitive to get the explicit consent of the authors of these websites. Instead, we respect the choice of content creators by removing opted-out images. Such a strategy cannot be exhaustive and we remain available for content creators to opt-out of the dataset.
Competitive pressures between the many interacting AIs continue to select for selfish behaviors, though we might be oblivious to this happening, as we have already ceded much of our oversight. If these clever, powerful, self-preserving AIs were then to start acting in harmful ways, it would be all but impossible to deactivate them or regain control.
AIs have supplanted humans as the most dominant species and their continued evolution is far beyond our influence. Their selfish traits eventually lead them to pursue their goals without regard for human wellbeing, with catastrophic consequences.
# 3.4 Suggestions
Mitigating the risks from competitive pressures will require a multifaceted approach, including regulations, limiting access to powerful AI systems, and multilateral cooperation between stakeholders at both the corporate and nation-state level. We will now outline some strategies for promoting safety and reducing race dynamics.
Safety regulation. Regulation holds AI developers to a common standard so that they do not cut corners on safety. While regulation does not itself create technical solutions, it can create strong incentives to develop and
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments (e.g. for benchmarks)...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We will release the code used for the creation of the model and its training, along with the model itself.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix A.4.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix A.4.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
implement those solutions. If companies cannot sell their products without certain safety measures, they will be more willing to develop those measures, especially if other companies are also held to the same standards. Even if some companies voluntarily self-regulate, government regulation can help prevent less scrupulous actors from cutting corners on safety. Regulation must be proactive, not reactive. A common saying is that aviation regulations are "written in blood," but regulators should develop regulations before a catastrophe, not afterward. Regulations should be structured so that they only create competitive advantages for companies with higher safety standards, rather than companies with more resources and better attorneys. Regulators should be independently staffed and not dependent on any one source of expertise (for example, large companies), so that they can focus on their mission to regulate for the public good without undue influence.
Data documentation. To ensure transparency and accountability in AI systems, companies should be required to justify and report the sources of data used in model training and deployment. Decisions by companies to use datasets that include hateful content or personal data contribute to the frenzied pace of AI development and undermine accountability. Documentation should include details regarding the motivation, composition, collection process, uses, and maintenance of each dataset [72].
(a) If your work uses existing assets, did you cite the creators? [Yes] We mentioned the libraries we used.
(b) Did you mention the license of the assets? [Yes] We only used open-source libraries.
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] See the ethics review guidelines part.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] The dataset we are releasing is built from publicly accessible websites. As such, there is no content in our dataset that hasn't been publicly visible on the web at some point. Similarly, the dataset might contain texts or images that can be considered offensive, insulting, or threatening, as such data is prevalent on the web. We took measures to remove pornographic content and low-quality texts as much as possible. We did not take additional intentional measures to remove personal information. A manual inspection of 250 random samples reveals that there isn't obvious
personally identifiable information (excluding celebrities and people mentioned in news articles), although it is likely that the dataset contains some.
Meaningful human oversight of AI decisions. While AI systems may grow capable of assisting human beings in making important decisions, AI decision-making should not be made fully autonomous: the inner workings of AIs are inscrutable, and while they often give reasonable results, they fail to do so reliably [73]. It is crucial that actors coordinate to maintain these standards in the face of future competitive pressures. Keeping humans in the loop on key decisions allows irreversible decisions to be double-checked and foreseeable errors to be avoided. One setting of particular concern is nuclear command and control. Nuclear-armed countries should continue to clarify domestically and internationally that the decision to launch a nuclear weapon must always be made by a human.
AI for cyberdefense. Risks resulting from AI-powered cyberwarfare would be reduced if cyberattacks became less likely to succeed. Deep learning can be used to improve cyberdefense and reduce the impact and success rate of cyberattacks. For example, improved anomaly detection could help detect intruders, malicious programs, or abnormal software behavior [74].
International coordination. International coordination can encourage different nations to uphold high safety standards with less worry that other nations will undercut them. Coordination could be accomplished via informal agreements, international standards, or international treaties regarding the development, use, and monitoring of AI technologies. The most effective agreements would be paired with robust verification and enforcement mechanisms.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
# A Appendix
# A.1 Creation of the Multimodal Web Document Dataset
# A.1.1 Collecting a Large Number of HTML Files
Our data collection process begins by considering the 25 most recent Common Crawl (https://commoncrawl.org/) dumps available at the time of dataset creation, spanning webpages from February 2020 to January/February 2023. We use a modified version of readability-lxml (https://github.com/buriy/python-readability) to extract the main text from the pages, discarding any pages that contain text of excessively high perplexity. This process yields a total of 41.2 billion documents.
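For illustration, a minimal sketch of this collection stage, assuming the warcio library for reading Common Crawl WARC files; the perplexity filter is stubbed out with a hypothetical `keep_page` heuristic, since the paper does not specify the scoring model:

```python
# Sketch: iterate over one Common Crawl WARC file, pull HTML responses,
# and extract the main text with readability-lxml.
from warcio.archiveiterator import ArchiveIterator
from readability import Document

def keep_page(text: str) -> bool:
    # Placeholder for the perplexity filter described above; any language
    # model scorer with a cutoff could be plugged in here instead.
    return len(text.split()) > 20

def iter_main_content(warc_path: str):
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            html = record.content_stream().read().decode("utf-8", errors="ignore")
            main_html = Document(html).summary()  # readability's main-content HTML
            if keep_page(main_html):
                yield url, main_html
```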
Selection of English content. To identify non-English content, we apply the FastText classifier (Joulin et al., 2017) to the extracted text, effectively filtering out 63.6% of the documents.
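This filter can be reproduced with fastText's public language-identification model (lid.176.bin); the 0.5 confidence cutoff below is an assumption, not a value from the paper:

```python
# Sketch: keep only documents whose predicted language is English.
import fasttext

lid_model = fasttext.load_model("lid.176.bin")  # fastText's language-ID model

def is_english(text: str, min_confidence: float = 0.5) -> bool:
    # predict() expects a single line of text, so collapse newlines first.
    labels, probs = lid_model.predict(text.replace("\n", " "))
    return labels[0] == "__label__en" and probs[0] >= min_confidence

print(is_english("OBELICS is a web-scale dataset of interleaved image-text documents."))
```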
Public control of general-purpose AIs. The development of AI poses risks that may never be adequately accounted for by private actors. In order to ensure that externalities are properly accounted for, direct public control of general-purpose AI systems may eventually be necessary. For example, nations could collaborate on a single effort to develop advanced AIs and ensure their safety, similar to how CERN serves as a unified effort for researching particle physics. Such an effort would reduce the risk of nations spurring an AI arms race.
# Positive Vision
In an ideal scenario, AIs would be developed, tested, and subsequently deployed only when the catastrophic risks they pose are negligible and well-controlled. There would be years of testing, monitoring, and societal integration of new AI systems before beginning work on the next generation. Experts would have a full awareness and understanding of developments in the field, rather than being entirely unable to keep up with a deluge of research. The pace of research advancement would be determined through careful analysis, not frenzied competition. All AI developers would be confident in the responsibility and safety of the others and not feel the need to cut corners.
# 4 Organizational Risks
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
Early text deduplication. Often, a set of URLs is crawled repeatedly across different Common Crawl snapshots. However, the content of these websites may vary as web administrators make changes over time. Hence, at this stage, we refrain from deduplicating documents based on their URLs. Instead, we perform MinHash (Broder, 1997) deduplication with 16 hashes calculated over 5-grams. To further refine the data, we eliminate documents containing substantial proportions of repeated paragraphs and n-grams, employing the methodology described in MassiveText (Rae et al., 2022). Lee et al. (2022) and Abbas et al. (2023) show that crawled data often contains a significant amount of duplication, and training on deduplicated data can improve performance.
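A sketch of the MinHash step with the datasketch library, using 16 permutations over word 5-grams as described above; the LSH similarity threshold of 0.8 is an assumed value:

```python
# Sketch: near-duplicate filtering with MinHash + LSH.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 16  # 16 hashes, as in the paper

def minhash_of(text: str) -> MinHash:
    words = text.split()
    shingles = {" ".join(words[i:i + 5]) for i in range(max(1, len(words) - 4))}
    m = MinHash(num_perm=NUM_PERM)
    for shingle in shingles:
        m.update(shingle.encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM)  # threshold is an assumption
corpus = ["some web page text ...", "some web page text ...", "a different page"]
deduplicated = []
for doc_id, text in enumerate(corpus):
    m = minhash_of(text)
    if not lsh.query(m):            # no near-duplicate indexed so far
        lsh.insert(str(doc_id), m)
        deduplicated.append(text)
```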
Quality classification. We employ a logistic regression classifier with hashed token frequencies to retain only pages containing human-written text, similar to Brown et al. (2020). The classifier is trained using documents from curated datasets, such as Wikipedia and OpenWebText (Gokaslan and Cohen, 2019), as positive examples, and documents sampled from Common Crawl as negative examples. For simplicity, we use a threshold of 0.5 for the probability that a document comes from a curated corpus, which acts as an indicator that a document is human-written.
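A sketch of such a classifier with scikit-learn, where HashingVectorizer supplies the hashed token frequencies; the curated and crawled examples below are placeholders:

```python
# Sketch: logistic regression over hashed token frequencies, with curated
# text as positives and raw crawled text as negatives.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

curated_texts = ["A well-formed encyclopedia paragraph about astronomy."]  # e.g. Wikipedia
crawl_texts = ["click here buy now cheap cheap best deal click here"]      # raw Common Crawl

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
X = vectorizer.transform(curated_texts + crawl_texts)
y = [1] * len(curated_texts) + [0] * len(crawl_texts)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def looks_human_written(text: str) -> bool:
    # 0.5 threshold on the probability of coming from a curated corpus.
    return clf.predict_proba(vectorizer.transform([text]))[0, 1] >= 0.5
```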
In January 1986, tens of millions of people tuned in to watch the launch of the Challenger Space Shuttle. Approximately 73 seconds after liftoff, the shuttle exploded, resulting in the deaths of everyone on board. Though tragic enough on its own, one of its crew members was a school teacher named Sharon Christa McAuliffe. McAuliffe was selected from over 10,000 applicants for the NASA Teacher in Space Project and was scheduled to become the first teacher to fly in space. As a result, millions of those watching were schoolchildren. NASA had the best scientists and engineers in the world, and if there was ever a mission NASA didn't want to go wrong, it was this one [75].
The Challenger disaster, alongside other catastrophes, serves as a chilling reminder that even with the best expertise and intentions, accidents can still occur. As we progress in developing advanced AI systems, it is crucial to remember that these systems are not immune to catastrophic accidents. An essential factor in preventing accidents and maintaining low levels of risk lies in the organizations responsible for these technologies. In this section, we discuss how organizational safety plays a critical role in the safety of AI systems. First, we discuss how even without competitive pressures or malicious actors, accidents can happen—in fact, they are inevitable. We then discuss how improving organizational factors can reduce the likelihood of AI catastrophes.
Following these steps, we obtain 1.1 billion documents and their HTML sources from the associated Common Crawl WARC files.
# A.1.2 Simplifying HTML Files
The original HTML content of a document contains a wealth of valuable information that proves highly beneficial in the process of filtering out undesirable text and images. Therefore, we prioritize pre-processing the raw HTML into simplified HTML, making the subsequent extraction of textual and visual elements more efficient. For this purpose, we use the library selectolax (https://github.com/rushter/selectolax), which facilitates efficient parsing of HTML files and creates corresponding DOM trees.
DOM tree cleaning strategies. To simplify the DOM trees, we employ several cleaning strategies. Firstly, we convert tags that indicate line breaks (such as <br>) into actual line breaks. Multiple consecutive line breaks and spaces are condensed into a single instance. Additionally, HTML comments are removed from the DOM trees. Furthermore, we implement recursive processes to eliminate empty leaves and unnest nodes. When a parent node lacks attached text and has only one child, the child node replaces the parent node in the DOM hierarchy. We repeat these operations after removing some nodes, and describe this process in the following paragraphs.
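The paper's exact recursion is not spelled out, but the spirit of this pass can be sketched with selectolax as follows; the <br> conversion and empty-element removal are shown, while comment stripping and unnesting would slot into the same loop:

```python
# Sketch: simplify a DOM tree in the spirit of the strategies above.
from selectolax.parser import HTMLParser

MEDIA_TAGS = {"img", "video", "audio", "iframe", "embed", "object", "picture", "source"}

def clean_dom(html: str) -> HTMLParser:
    tree = HTMLParser(html)
    # 1. Turn explicit line-break tags into actual line breaks.
    for br in tree.css("br"):
        br.replace_with("\n")
    # 2. Repeatedly drop elements that carry no text at all, keeping media
    #    tags; restart the scan after each removal to avoid stale handles.
    changed = True
    while changed:
        changed = False
        for node in tree.css("*"):
            if node.tag in MEDIA_TAGS or node.text(deep=True, strip=True):
                continue
            node.decompose()
            changed = True
            break
    return tree

tree = clean_dom("<div><p>Hello<br>world</p><p>   </p></div>")
print(tree.body.text())
```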
Catastrophes occur even when competitive pressures are low. Even in the absence of competitive pressures or malicious actors, factors like human error or unforeseen circumstances can still bring about catastrophe. The Challenger disaster illustrates that organizational negligence can lead to loss of life, even when there is no urgent need to compete or outperform rivals. By January 1986, the space race between the US and USSR had largely diminished, yet the tragic event still happened due to errors in judgment and insufficient safety precautions. Similarly, the Chernobyl nuclear disaster in April 1986 highlights how catastrophic accidents can occur in the absence of external pressures. At that state-run plant, which faced no international competition, the disaster unfolded when a safety test involving the reactor's cooling system was mishandled by an inadequately prepared night shift crew. This led to an unstable reactor core, causing explosions and the release of radioactive particles that contaminated large swathes of Europe [76]. Seven years earlier, America came close to experiencing its own Chernobyl when, in March 1979, a partial meltdown occurred at the Three Mile Island nuclear power plant. Though less catastrophic than Chernobyl, both events highlight how even with extensive safety measures in place and few outside influences, catastrophic accidents can still occur.
Tag unwrapping. This operation involves removing unnecessary styling applied to displayed text by unwrapping a predefined set of tags given below. By applying this procedure, tags such as <i>example</i> are transformed into example, eliminating the associated styling elements.
The following tags are unwrapped during the processing of HTML files: a, abbr, acronym, b, bdi, bdo, big, cite, code, data, dfn, em, font, i, ins, kbd, mark, q, s, samp, shadow, small, span, strike, strong, sub, sup, time, tt, u, var, wbr.
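With selectolax this unwrapping is a single call; a sketch on a toy snippet, using a subset of the tag list above:

```python
# Sketch: strip styling tags while keeping their inner text.
from selectolax.parser import HTMLParser

TAGS_TO_UNWRAP = ["a", "abbr", "b", "cite", "em", "font", "i", "span", "strong", "u"]

tree = HTMLParser("<p>An <i>example</i> with <b>styled</b> text</p>")
tree.unwrap_tags(TAGS_TO_UNWRAP)
print(tree.body.child.html)  # <p>An example with styled text</p>
```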
Another example of a costly lesson on organizational safety came just one month after the accident at Three Mile Island. In April 1979, spores of Bacillus anthracis (or simply "anthrax," as it is commonly known) were accidentally released from a Soviet military research facility in the city of Sverdlovsk. This led to an outbreak of anthrax that resulted in at least 66 confirmed deaths [77]. Investigations into the incident revealed that the cause of the release was a procedural failure and poor maintenance of the facility's biosecurity systems, despite the facility being operated by the state and not subjected to significant competitive pressures.
The unsettling reality is that AI is far less understood than nuclear technology or rocketry, and AI industry standards are far less stringent. Nuclear reactors are based on solid, well-established and well-understood theoretical principles. The engineering behind them is informed by that theory, and components are stress-tested to the extreme. Nonetheless, nuclear accidents still happen. In contrast, AI lacks a comprehensive theoretical understanding, and its inner workings remain a mystery even to those who create it. This presents an added challenge of controlling and ensuring the safety of a technology that we do not yet fully comprehend.
2306.16527 | 91 | Node removal Following the previous step, we conduct a manual inspection of practical examples encompassing all existing HTML tags. Based on our findings, we establish a curated list that outlines the tags we intend to retain. Any nodes within the HTML DOM tree with tags not included in this list are subsequently removed. We specifically retain tags that define the document structure (e.g., p or h) and tags associated with media elements (e.g., img). However, we opt to remove tags that typically consist of logos, generic content, or spam (e.g., header), as well as tags that often contain noisy text related to website navigation (e.g., li), or text that poses challenges in terms of linearization (e.g., table).
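A minimal sketch of this node-level pruning, again with selectolax. The keep-list is an excerpt of the full lists that follow, and the snippet assumes strip_tags drops a tag together with its entire subtree:

```python
from selectolax.parser import HTMLParser

# Excerpt of the keep-list; the full structural and media tag lists follow below.
KEPT_TAGS = {"html", "head", "body", "div", "p", "h1", "h2", "h3", "blockquote",
             "ol", "ul", "figure", "img", "video", "audio", "source"}

def remove_unwanted_nodes(html: str) -> str:
    tree = HTMLParser(html)
    present = {node.tag for node in tree.root.traverse()}  # tags that occur in the page
    tree.strip_tags(list(present - KEPT_TAGS))  # drops each tag together with its subtree
    return tree.body.html

print(remove_unwanted_nodes("<body><p>kept</p><table><tr><td>dropped</td></tr></table></body>"))
# -> <body><p>kept</p></body>
```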
We retain the following tags during the processing of HTML files, as they define the document's structure: address, article, aside, blink, blockquote, body, br, caption, center, dd, dl, dt, div, figcaption, h, h1, h2, h3, h4, h5, h6, hgroup, html, legend, main, marquee, ol, p, section, summary, title, ul. Additionally, we preserve the following tags that define media elements: audio, embed, figure, iframe, img, object, picture, video. Furthermore, we keep the source tag, as it may contain an interesting attribute. | 2306.16527#91 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents |
2306.12001 | 92 | AI accidents could be catastrophic. Accidents in AI development could have devastating consequences. For example, imagine an organization unintentionally introduces a critical bug in an AI system designed to accomplish a specific task, such as helping a company improve its services. This bug could drastically alter the AIâs behavior, leading to unintended and harmful outcomes. One historical example of such a case occurred when researchers at OpenAI were attempting to train an AI system to generate helpful, uplifting responses. During a code cleanup, the researchers mistakenly flipped the sign of the reward used to train the AI [78].
| 2306.12001#92 | An Overview of Catastrophic AI Risks |
2306.16527 | 92 | Modification of specific nodes We then specifically target <div> nodes whose ID contains footer, header, navigation, nav, navbar, or menu, or that carry a date attribute, as well as nodes whose CSS class includes footer or site-info. These nodes typically contain website navigation content or article dates and are therefore removed. Additionally, we observe that the presence of a CSS rule with the class more-link often indicates a distinct shift in topic within the webpage, resembling the start of a new document. To account for this, we replace these nodes with the text END_OF_DOCUMENT_TOKEN_TO_BE_REPLACED, which we later replace with an end-of-sentence (EOS) token during training.
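A hedged sketch of this targeted clean-up; nested-match edge cases are ignored, and tree.css, decompose, and replace_with are assumed to behave as in selectolax's documented API:

```python
from selectolax.parser import HTMLParser

NAV_MARKERS = ("footer", "header", "navigation", "nav", "navbar", "menu")

def prune_page(html: str) -> str:
    tree = HTMLParser(html)
    # Drop <div> nodes whose id suggests navigation, or that carry a date attribute.
    for node in tree.css("div"):
        node_id = node.attributes.get("id") or ""
        if any(marker in node_id for marker in NAV_MARKERS) or "date" in node.attributes:
            node.decompose()
    # Drop nodes whose class marks site chrome.
    for selector in (".footer", ".site-info"):
        for node in tree.css(selector):
            node.decompose()
    # 'more-link' nodes often mark the start of a new document on the same page.
    for node in tree.css(".more-link"):
        node.replace_with("END_OF_DOCUMENT_TOKEN_TO_BE_REPLACED")
    return tree.body.html
```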
With these processing steps, we reduce the size of the HTML files by more than a factor of 10 on average while preserving the interesting content.
# A.1.3 Extracting Multimodal Web Documents
In this section, we begin with the simplified HTML files obtained from the previous section. Our objective is to transform these files into a structured web document format, which is a sequence of interleaved texts and images. | 2306.16527#92 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents |
2306.12001 | 93 | Figure 11: Hazards across multiple domains remind us of the risks in managing complex systems, from biological to nuclear, and now, AIs. Organizational safety is vital to reduce the risk of catastrophic accidents.
As a result, instead of generating helpful content, the AI began producing hate-filled and sexually explicit text overnight without being halted. Accidents could also involve the unintentional release of a dangerous, weaponized, or lethal AI system. Since AIs can be easily duplicated with a simple copy-paste, a leak or hack could quickly spread the AI system beyond the original developers' control. Once the AI system becomes publicly available, it would be nearly impossible to put the genie back in the bottle. | 2306.12001#93 | An Overview of Catastrophic AI Risks |
2306.16527 | 93 | Preservation of the original structure of the web pages During the extraction process, we meticulously preserve the original structure of the web pages from the simplified HTML files. We extract the texts and image links while maintaining their order of appearance in the DOM tree. Each HTML tag denotes a distinct separation between the preceding and subsequent nodes and we retain any line breaks and line feeds that are present in the original page, preserving the formatting and visual rendering of the content.
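A minimal sketch of such an order-preserving linearization, here with Python's standard html.parser rather than the paper's own tooling (an assumption made for self-containedness):

```python
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    """Walks the DOM in document order, emitting interleaved text and image entries."""
    def __init__(self):
        super().__init__()
        self.sequence = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.sequence.append(("image", src))

    def handle_data(self, data):
        if data.strip():
            self.sequence.append(("text", data))

parser = Linearizer()
parser.feed("<p>A caption.</p><img src='photo.jpg'><p>More text.</p>")
print(parser.sequence)
# [('text', 'A caption.'), ('image', 'photo.jpg'), ('text', 'More text.')]
```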
Image downloading To download the images, we use the img2dataset (Beaumont, 2021) library. We attempt to download a massive collection of 3.6 billion images, of which 55% (approximately 2 billion images) were successfully downloaded. For that, we employ 20 virtual machines. This distributed approach allows us to complete the operation within a few days.
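A hedged sketch of driving such a download through img2dataset's Python entry point; the path, worker counts, and image_size below are illustrative placeholders, not the paper's exact settings:

```python
from img2dataset import download

# Illustrative settings; the paper distributed the job over 20 virtual machines.
download(
    url_list="image_urls.txt",        # one image URL per line (hypothetical path)
    output_folder="downloaded_images",
    processes_count=16,
    thread_count=64,
    image_size=512,                   # illustrative resize bound
    output_format="webdataset",
)
```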
# A.1.4 Filtering Multimodal Web Documents
The filtering process consists of two steps, targeting different levels of granularity. In the first step, filtering occurs at the node level for images and at the paragraph level (separated by line breaks) for text. We evaluate each paragraph or image and potentially modify or remove it based on specific criteria. The second step, conducted at the document level, involves deciding whether to retain or discard the output documents from the first step. The majority of the text filters we use for both steps were adapted from Laurençon et al. (2022). | 2306.16527#93 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents |
2306.12001 | 94 | Gain-of-function research could potentially lead to accidents by pushing the boundaries of an AI systemâs destructive capabilities. In these situations, researchers might intentionally train an AI system to be harmful or dangerous in order to understand its limitations and assess possible risks. While this can lead to useful insights into the risks posed by a given AI system, future gain-of-function research on advanced AIs might uncover capabilities significantly worse than anticipated, creating a serious threat that is challenging to mitigate or control. As with viral gain-of-function research, pursuing AI gain-of-function research may only be prudent when conducted with strict safety procedures, oversight, and a commitment to responsible information sharing. These examples illustrate how AI accidents could be catastrophic and emphasize the crucial role that organizations developing these systems play in preventing such accidents.
| 2306.12001#94 | An Overview of Catastrophic AI Risks |
2306.16527 | 94 | Node-level image filtering We discard images with formats other than jpg, png or webp, with a side length below 150 pixels or exceeding 20,000 pixels, as well as those with an aspect ratio greater than 2 or less than 1/2. These criteria help exclude images that are too small, excessively large, or have disproportionate dimensions, which are often indicative of low-quality or irrelevant content. To eliminate some logos and generic images, as in Zhu et al. (2023), we remove images whose URL contains one of the sub-strings logo, button, icon, plugin or widget. | 2306.16527#94 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents |
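A minimal sketch of the node-level image filter just described, using Pillow; only the thresholds stated above come from the paper, while the function shape and file handling are illustrative:

```python
from PIL import Image

VALID_FORMATS = {"JPEG", "PNG", "WEBP"}
URL_BLOCKLIST = ("logo", "button", "icon", "plugin", "widget")

def keep_image(path: str, url: str) -> bool:
    """Apply the format, size, aspect-ratio, and URL sub-string rules described above."""
    if any(word in url for word in URL_BLOCKLIST):
        return False
    with Image.open(path) as img:
        if img.format not in VALID_FORMATS:
            return False
        width, height = img.size
        if min(width, height) < 150 or max(width, height) > 20_000:
            return False
        return 0.5 <= width / height <= 2
```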
2306.12001 | 95 | # 4.1 Accidents Are Hard to Avoid
When dealing with complex systems, the focus needs to be placed on ensuring accidents don't cascade into catastrophes. In his book "Normal Accidents: Living with High-Risk Technologies," sociologist Charles Perrow argues that accidents are inevitable and even "normal" in complex systems, as they are not merely caused by human errors but also by the complexity of the systems themselves [79]. In particular, such accidents are likely to occur when the intricate interactions between components cannot be completely planned or foreseen. For example, in the Three Mile Island accident, a contributing factor to the lack of situational awareness by the reactor's operators was the presence of a yellow maintenance tag, which covered valve position lights in the emergency feedwater lines [80]. This prevented operators from noticing that a critical valve was closed, demonstrating the unintended consequences that can arise from seemingly minor interactions within complex systems. | 2306.12001#95 | An Overview of Catastrophic AI Risks |
2306.16527 | 95 | Paragraph-level text filtering Regarding text paragraphs, we apply a series of filters to remove undesirable or irrelevant content. We discard paragraphs with fewer than 4 words, as they typically contain insufficient information to be considered meaningful. Additionally, we remove paragraphs with a high repetition ratio, indicating potential spam content, and those with an excessive ratio of special characters, often associated with irrelevant or low-quality text. Furthermore, we filter out paragraphs with a low ratio of stop words, as this is often indicative of machine-generated or nonsensical content. Similarly, we exclude paragraphs with a low punctuation ratio, as they typically indicate poor-quality texts. We also consider the flagged word ratio, removing paragraphs with a high proportion of flagged words associated with adult or inappropriate content. We also use KenLM (Heafield, 2011) models trained on Wikipedia to filter out paragraphs with excessively high perplexity scores. To minimize spam, one approach is to identify generic sentences or invitations to share articles on social networks commonly found at the end of documents. We create a list of frequently used words associated with these paragraphs and then filter out paragraphs that contain an excessive proportion of words from this list. | 2306.16527#95 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents |
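A hedged sketch of a few of the paragraph-level filters just listed; every threshold and the tiny stop-word set are illustrative placeholders (the paper's cutoffs are given in its Table 3), and the KenLM perplexity step is indicated only as a comment:

```python
# Illustrative thresholds only; the paper's exact cutoffs are listed in its Table 3.
MIN_WORDS = 4
MAX_SPECIAL_CHAR_RATIO = 0.3   # hypothetical
MIN_STOP_WORD_RATIO = 0.05     # hypothetical
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it"}  # tiny illustrative set

def keep_paragraph(text: str) -> bool:
    words = text.split()
    if len(words) < MIN_WORDS:  # too short to carry meaningful information
        return False
    special = sum(not (c.isalnum() or c.isspace()) for c in text)
    if special / max(len(text), 1) > MAX_SPECIAL_CHAR_RATIO:  # noisy characters
        return False
    if sum(w.lower() in STOP_WORDS for w in words) / len(words) < MIN_STOP_WORD_RATIO:
        return False  # likely machine-generated or nonsensical text
    # A perplexity filter with a KenLM model trained on Wikipedia would go here, e.g.:
    #   import kenlm; lm = kenlm.Model("wikipedia.arpa")  # hypothetical model path
    #   if lm.perplexity(text) > MAX_PERPLEXITY: return False
    return True
```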
2306.12001 | 96 | Unlike nuclear reactors, which are relatively well-understood despite their complexity, complete technical knowledge of most complex systems is often nonexistent. This is especially true of deep learning systems, for which the inner workings are exceedingly difficult to understand, and where the reason why certain design choices work can be hard to understand even in hindsight. Furthermore, unlike components in other industries, such as gas tanks, which are highly reliable, deep learning systems are neither perfectly accurate nor highly reliable. Thus, the focus for organizations dealing with complex systems, especially deep learning systems, should not be solely on eliminating accidents, but rather on ensuring that accidents do not cascade into catastrophes.
Accidents are hard to avoid because of sudden, unpredictable developments. Scientists, inventors, and experts often significantly underestimate the time it takes for a groundbreaking technological advancement to become a reality. The Wright brothers famously claimed that powered flight was fifty years away, just two years before they achieved it. Lord Rutherford, a prominent physicist and the father of nuclear physics, dismissed the idea of extracting energy from nuclear fission as "moonshine," only for Leo Szilard to invent the nuclear chain reaction less than 24 hours later. Similarly, Enrico Fermi expressed 90 percent confidence in 1939 that it was impossible to use uranium to sustain a fission chain reaction; yet just four years later he was personally overseeing the first reactor [81]. | 2306.12001#96 | An Overview of Catastrophic AI Risks |
2306.16527 | 96 | To augment our ability to identify non-human-generated content, we consider a subset of 10 million documents from OSCAR (Ortiz Suárez et al., 2020), a web-crawled corpus. We extract the words from these documents, remove punctuation, convert them to lowercase, and retain only the words occurring at least twice, which we refer to as common words. We filter out paragraphs with a too-low common word ratio. The cutoff values for all text filters at the paragraph level are detailed in Table 3. | 2306.16527#96 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents |
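A minimal sketch of the common-words construction and the resulting filter; the cutoff is a hypothetical placeholder, since the paper's value is given in its Table 3:

```python
from collections import Counter
import string

def build_common_words(reference_docs):
    """Vocabulary of words occurring at least twice in a reference corpus (e.g. OSCAR)."""
    counts = Counter()
    table = str.maketrans("", "", string.punctuation)
    for doc in reference_docs:
        counts.update(doc.translate(table).lower().split())
    return {word for word, n in counts.items() if n >= 2}

def common_word_ratio(paragraph, common_words):
    words = paragraph.lower().split()
    return sum(w in common_words for w in words) / len(words) if words else 0.0

MIN_COMMON_WORD_RATIO = 0.8  # hypothetical placeholder; see the paper's Table 3
```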