doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.12001 | 151 | Legal liability for cloud compute providers. Cloud compute providers should take steps to ensure that their platforms are not helping rogue AIs survive and spread. If we impose legal liabilities, cloud compute providers would be motivated to ensure that agents running on their hardware are safe. If providers find an unsafe agent on their server, they could hit the off switch for the portions of their systems used by rogue agents. Note, however, that this intervention is only as effective as the AI compute monitors: rogue AIs that can easily manipulate or bypass the monitors defeat it. To strengthen this liability framework, we could imitate international agreements for cyberattacks, essentially creating a decentralized off-switch. This would allow for swift interventions if rogue AIs start spreading. (A toy sketch of such an off-switch policy follows this row.)
Support AI safety research. Many paths toward improved AI control require technical research. The following technical machine learning research areas aim to address problems of AI control. Each research area could be substantially advanced with an increase in focus and funding from industry, private foundations, and government.
| 2306.12001#151 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
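The off-switch idea in the row above can be made concrete. Below is a minimal sketch, assuming a hypothetical provider-side record (`Workload`) and an external AI compute monitor that assigns each workload a risk score; none of these names correspond to a real cloud API.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """Hypothetical record for one tenant workload on a compute provider."""
    workload_id: str
    risk_score: float  # assumed to come from an external AI compute monitor

def enforce_off_switch(workloads: list[Workload], threshold: float = 0.9) -> list[str]:
    """Return the IDs of workloads the provider should suspend.

    Policy sketch only: a real deployment would need audit logging, an
    appeals process, and defenses against the monitor itself being gamed.
    """
    return [w.workload_id for w in workloads if w.risk_score >= threshold]

# Only the workload the monitor flags as unsafe gets switched off.
jobs = [Workload("job-1", 0.05), Workload("job-2", 0.97)]
assert enforce_off_switch(jobs) == ["job-2"]
```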
2306.16527 | 151 | company, business, companies, market, industry, investment, investors, capital, tech, firm, ceo, based, technology, billion, businesses, group, million, financial, growth wwe, ring, wrestling, match, rick, randy, champion, title, wrestler, vince, show, fans, wrestlers, owens, tag, baker, triple, shane, raw, cody anime, musk, japanese, tesla, manga, series, elon, japan, ninja, episode, samurai, kai, characters, demon, karate, character, also, dragon, arc, tokyo brown, scotland, scottish, gordon, glasgow, celtic, perry, walker, murray, graham, letter, edinburgh, cover, campbell, watson, thomas, also, well, neil, henderson video, youtube, videos, live, watch, channel, streaming, audio, content, stream, channels, footage, shows, online, also, NUMk, recording, watching, clip, one church, catholic, pope, religious, christian, churches, bishop, francis, faith, holy, priest, saint, mass, vatican, | 2306.16527#151 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
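The "comprehensive filtering rules" mentioned in the OBELICS row above can be pictured as per-document predicates over text and image statistics. A hedged sketch follows; the thresholds and field names are illustrative assumptions, not the rules actually used to build OBELICS.

```python
def keep_document(doc: dict) -> bool:
    """Toy filter for one interleaved image-text web document.

    `doc` is assumed to look like {"text": str, "image_urls": list[str]};
    the thresholds below are invented for illustration.
    """
    n_words = len(doc["text"].split())
    n_images = len(doc["image_urls"])
    if n_words < 50 or n_images == 0:      # too little text, or no images
        return False
    if n_images > 0.1 * n_words:           # image-heavy pages are often spam
        return False
    return True

docs = [
    {"text": "word " * 200, "image_urls": ["https://example.com/a.png"]},
    {"text": "too short", "image_urls": []},
]
print([keep_document(d) for d in docs])  # [True, False]
```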
2306.12001 | 152 | • Adversarial robustness of proxy models. AI systems are typically trained with reward or loss signals that imperfectly specify desired behavior. For example, AIs may exploit weaknesses in the oversight schemes used to train them. Increasingly, the systems providing oversight are AIs themselves. To reduce the chance that AI models will exploit defects in AIs providing oversight, research is needed to increase the adversarial robustness of AI models providing oversight ("proxy models"). Because oversight schemes and metrics may eventually be gamed, it is also important to be able to detect when this might be happening so the risk can be mitigated [131]. (A gaming-detection sketch follows this row.)
• Model honesty. AI systems may fail to accurately report their internal state [132, 133]. In the future, systems may deceive their operators in order to appear beneficial when they are actually very dangerous. Model honesty research aims to make model outputs conform to a model's internal "beliefs" as closely as possible. Research can identify techniques to understand a model's internal state or make its outputs more honest and more faithful to its internal state [134]. (A probe sketch follows this row.)
• Transparency and Representation Engineering. Deep learning models are notoriously difficult to understand. Better visibility into their inner workings would allow humans, and potentially other AI systems, to identify problems more quickly. Research can include analysis of small components [135, 136], or it can try to understand a network's high-level internal representations [134]. | 2306.12001#152 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
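The first bullet in the row above (adversarial robustness of proxy models) calls for detecting when an oversight signal is being gamed. One simple, assumption-laden heuristic: flag runs where the proxy reward keeps rising while a trusted held-out ("gold") evaluation stalls or degrades. A sketch with synthetic reward curves:

```python
import numpy as np

def proxy_gaming_suspected(proxy: np.ndarray, gold: np.ndarray, window: int = 100) -> bool:
    """Heuristic flag for reward over-optimization.

    `proxy` and `gold` are per-step scores over training. Gaming is
    suspected when, over the latest window, the proxy trend is up while
    the trusted gold trend is flat or down. Illustrative only.
    """
    steps = np.arange(window)
    proxy_slope = np.polyfit(steps, proxy[-window:], 1)[0]  # slope of proxy reward
    gold_slope = np.polyfit(steps, gold[-window:], 1)[0]    # slope of gold reward
    return proxy_slope > 0 and gold_slope <= 0

rng = np.random.default_rng(0)
t = np.arange(500)
proxy = 0.010 * t + rng.normal(0, 0.1, 500)                    # proxy keeps climbing
gold = np.where(t < 250, 0.010 * t, 2.5 - 0.004 * (t - 250))   # gold later degrades
gold = gold + rng.normal(0, 0.05, 500)

print(proxy_gaming_suspected(proxy, gold))  # True: proxy up while gold falls
```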
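For the model-honesty and representation-engineering bullets, a common entry point is a linear probe that tries to read a property such as "treated as true" out of a model's hidden states [134]. The sketch below runs on synthetic activations so it is self-contained; in practice the activations would come from a real network, and the probe is only a starting point, not the papers' method.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 64
truth_direction = rng.normal(size=DIM)  # pretend "believed true" axis in activation space

def synthetic_activation(is_true: bool) -> np.ndarray:
    """Stand-in for a hidden state on a statement the model treats as true/false."""
    sign = 1.0 if is_true else -1.0
    return sign * truth_direction + rng.normal(scale=2.0, size=DIM)

labels = rng.integers(0, 2, size=400).astype(bool)
acts = np.stack([synthetic_activation(bool(y)) for y in labels])

# Least-squares linear probe: find w with acts @ w ~ +1 for true, -1 for false.
targets = np.where(labels, 1.0, -1.0)
w, *_ = np.linalg.lstsq(acts, targets, rcond=None)

accuracy = ((acts @ w > 0) == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")  # high, since the property is linearly encoded
```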
2306.16527 | 152 | one church, catholic, pope, religious, christian, churches, bishop, francis, faith, holy, priest, saint, mass, vatican, religion, pastor, christ, parish, christians phone, apple, samsung, iphone, pro, smartphone, device, galaxy, camera, also, display, battery, new, sNUM, screen, NUMgb, phones, NUMg, android city, project, area, council, residents, community, park, town, street, public, local, cities, new, development, mayor, urban, construction, district, building social, culture, society, cultural, people, political, different, moral, identity, important, values, issues, often, public, role, many, way, community, understanding, view smith, jack, tom, ben, adam, alex, kevin, richard, simon, holmes, billy, bell, oliver, harvey, jake, collins, burke, baldwin, joel, aaron fight, title, tennis, champion, ufc, round, world, boxing, fighter, one, win, open, martial, first, match, mma, fighters, fighting, career india, indian, state, delhi, government, | 2306.16527#152 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 153 | • Detecting and removing hidden model functionality. Deep learning models may now or in the future contain dangerous functionality, such as the capacity for deception, Trojans [137, 138, 139], or biological engineering capabilities, that should be removed from those models. Research could focus on identifying and removing [140] these functionalities. (A toy backdoor-scanning sketch follows this row.)
# Positive Vision
In an ideal scenario, we would have full confidence in the controllability of AI systems both now and in the future. Reliable mechanisms would be in place to ensure that AI systems do not act deceptively. There would be a strong understanding of AI system internals, sufficient to have knowledge of a system's tendencies and goals; these tools would allow us to avoid building systems that are deserving of moral consideration or rights. AI systems would be directed to promote a pluralistic set of diverse values, ensuring the enhancement of certain values doesn't lead to the total neglect of others. AI assistants could act as advisors, giving us ideal advice and helping us make better decisions according to our own values [141]. In general, AIs would improve social welfare and allow for corrections in cases of error or as human values naturally evolve.
# 6 Discussion of Connections Between Risks
So far, we have considered four sources of AI risk separately, but they also interact with each other in complex ways. We give some examples to illustrate how risks are connected. | 2306.12001#153 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
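The hidden-functionality bullet in the row above can be illustrated with a crude Trojan scan: stamp a candidate trigger onto many random inputs and check whether predictions collapse onto a single class, a common signature of a planted backdoor [137]. The `predict` function here is a stand-in toy, not a real model.

```python
import numpy as np

rng = np.random.default_rng(2)
TRIGGER = np.zeros(16)
TRIGGER[:3] = 5.0  # the planted trigger pattern (toy)

def predict(x: np.ndarray) -> int:
    """Toy backdoored 'model': always class 7 when the trigger is present."""
    if np.allclose(x[:3], 5.0):
        return 7
    return int(abs(x.sum()) * 10) % 10  # arbitrary behavior on clean inputs

def trigger_suspicion(candidate: np.ndarray, n_probes: int = 200) -> float:
    """Share of random inputs forced onto one class by stamping `candidate`.

    Values near 1.0 suggest the candidate behaves like a backdoor trigger.
    """
    xs = rng.normal(size=(n_probes, 16))
    xs[:, :3] = candidate[:3]                 # stamp the candidate pattern
    preds = np.array([predict(x) for x in xs])
    return np.bincount(preds).max() / n_probes

print(trigger_suspicion(TRIGGER))        # ~1.0: predictions collapse to class 7
print(trigger_suspicion(np.zeros(16)))   # much lower: predictions stay spread
```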
2306.16527 | 153 | world, boxing, fighter, one, win, open, martial, first, match, mma, fighters, fighting, career india, indian, state, delhi, government, also, minister, bjp, said, modi, singh, chief, congress, crore, pradesh, mumbai, gandhi, lakh, hindu war, world, battle, empire, british, army, history, german, peace, great, military, wars, end, conflict, power, two, land, forces, soldiers, fight www, map, sri, http, https, maps, lanka, com, atlas, derby, tamil, lankan, html, maria, angelo, tara, colombo, org, mapping, easter league, champions, team, goals, world, season, football, club, cup, madrid, barcelona, player, real, players, match, messi, ronaldo, liverpool, final app, google, apple, android, users, mobile, apps, phone, new, devices, device, ios, iphone, microsoft, use, also, features, user, screen, windows lee, korean, korea, kim, south, park, seoul, drama, group, bts, jin, jung, first, | 2306.16527#153 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 154 | So far, we have considered four sources of AI risk separately, but they also interact with each other in complex ways. We give some examples to illustrate how risks are connected.
Imagine, for instance, that a corporate AI race compels companies to prioritize the rapid development of AIs. This could increase organizational risks in various ways. Perhaps a company could cut costs by putting less money toward information security, leading to one of its AI systems getting leaked. This would increase the probability that someone with malicious intent obtains the AI system and uses it to pursue their harmful objectives. Here, an AI race can increase organizational risks, which in turn can make malicious use more likely.
In another potential scenario, we could envision the combination of an intense AI race and low organizational safety leading a research team to mistakenly view general capabilities advances as "safety." This could hasten the development of increasingly capable models, reducing the available time to learn how to make them controllable. The accelerated development would also likely feed back into competitive pressures, meaning that less effort would be spent on ensuring models were controllable. This could give rise to the release of a
highly powerful AI system that we lose control over, leading to a catastrophe. Here, competitive pressures and low organizational safety can reinforce AI race dynamics, which can undercut technical safety research and increase the chance of a loss of control. | 2306.12001#154 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.12001 | 155 | Competitive pressures in a military environment could lead to an AI arms race, and increase the potency and autonomy of AI weapons. The deployment of AI-powered weapons, paired with insufficient control of them, would make a loss of control more deadly, potentially existential. These are just a few examples of how these sources of risk might combine, trigger, and reinforce one another.
It is also worth noting that many existential risks could arise from AIs amplifying existing concerns. Power inequality already exists, but AIs could lock it in and widen the chasm between the powerful and the powerless, even enabling an unshakable global totalitarian regime, an existential risk. Similarly, AI manipulation could undermine democracy, which also increases the existential risk of an irreversible totalitarian regime. Disinformation is already a pervasive problem, but AIs could exacerbate it beyond control, to a point where we lose a consensus on reality. AIs could develop more deadly bioweapons and reduce the required technical expertise for obtaining them, greatly increasing existing risks of bioterrorism. AI-enabled cyberattacks could make war more likely, which would increase existential risk. Dramatically accelerated economic automation could lead to eroded human control and enfeeblement, an existential risk. Each of those issuesâpower concentration, disinformation, cyberattacks, automationâis causing ongoing harm, and their exacerbation by AIs could eventually lead to a catastrophe humanity may not recover from. | 2306.12001#155 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 155 | Mobile Applications 0.73%
Korean Entertainment 0.11%
Economics 1.01%
Video Games 0.49%
Time Indicators 0.3%
Science Fiction/Fantasy 0.14%
Music Production 1.09%
Transportation 0.42%
Personal Life 1.14%
American History 0.6%
Global Policy 0.96%
South Asian Affairs 0.2%
Sports Scores 0.83%
Travel/Daily Life 1.03%
Announcements 0.83%
Online Dating 0.13%
Superhero Comics 0.42%
Space Exploration 0.31%
Musical Performance 0.57%
afghanistan, taliban, india, pakistani, | 2306.16527#155 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
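The topic / percentage / keyword triples in the OBELICS rows read like the output of a topic model over the dataset's documents. A hedged sketch of how such a table could be produced with LDA in scikit-learn; the preprocessing, topic count, and corpus here are assumptions, not the paper's exact pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny stand-in corpus; OBELICS itself would supply millions of documents.
docs = [
    "game players play season team win match score",
    "church faith religious holy priest mass parish",
    "phone apple device screen battery camera android",
] * 20

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
vocab = vec.get_feature_names_out()

# Topic share = mean topic weight across documents (the "%" column above);
# top words per topic correspond to the keyword lists in these rows.
shares = lda.transform(X).mean(axis=0)
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[::-1][:5]]
    print(f"topic {k}: {shares[k]:.2%}  {', '.join(top)}")
```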
2306.12001 | 156 | As we can see, ongoing harms, catastrophic risks, and existential risks are deeply intertwined. Historically, existential risk reduction has focused on targeted interventions such as technical AI control research, but the time has come for broad interventions [142] like the many sociotechnical interventions outlined in this paper. In mitigating existential risk, it does not make practical sense to ignore other risks. Ignoring ongoing harms and catastrophic risks normalizes them and could lead us to "drift into danger" [143]. Overall, since existential risks are connected to less extreme catastrophic risks and other standard risk sources, and because society is increasingly willing to address various risks from AIs, we believe that we should not solely focus on directly targeting existential risks. Instead, we should consider the diffuse, indirect effects of other risks and take a more comprehensive approach to risk management.
# 7 Conclusion | 2306.12001#156 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 156 | games, game, xbox, gaming, nintendo, video, play, console, playstation, mario, psNUM, one, sony, players, steam, gamers, switch, playing, titles first, years, since, time, two, NUMth, three, total, day, year, may, second, september, june, january, november, four, NUM/NUM, april star, wars, trek, lego, luke, figures, force, series, jedi, kirk, toy, universe, figure, new, ship, galaxy, crew, fans, space, disney album, sound, music, band, track, song, guitar, metal, sounds, tracks, songs, record, bass, vocals, new, release, rock, like, released, drums document, token, road, end, replaced, bike, traffic, driving, drivers, bus, train, driver, bridge, car, station, ride, roads, route, transport, rail life, people, love, world, many, time, one, always, years, great, every, like, way, friends, never, day, work, first, hope, best american, history, NUMs, new, first, years, century, america, early, states, united, NUMth, became, world, many, one, today, time, war change, climate, development, economic, government, global, policy, need, sector, world, public, new, support, economy, national, social, future, health, impact, crisis kashmir, pakistan, bangladesh, khan, afghan, also, nepal, country, indian, kabul, jammu, singh, islamabad, ali, lahore, karachi | 2306.16527#156 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 157 | # 7 Conclusion
In this paper, we have explored how the development of advanced AIs could lead to catastrophe, stemming from four primary sources of risk: malicious use, AI races, organizational risks, and rogue AIs. This lets us decompose AI risks into four proximate causes: an intentional cause, environmental/structural cause, accidental cause, or an internal cause, respectively. We have considered ways in which AIs might be used maliciously, such as terrorists using AIs to create deadly pathogens. We have looked at how a military or corporate AI race could rush us into giving AIs decision-making powers, leading us down a slippery slope to human disempowerment. We have discussed how inadequate organizational safety could lead to catastrophic accidents. Finally, we have addressed the challenges in reliably controlling advanced AIs, including mechanisms such as proxy gaming and goal drift that might give rise to rogue AIs pursuing undesirable actions without regard for human wellbeing. These dangers warrant serious concern. Currently, very few people are working on AI risk reduction. We do not yet know how to control highly advanced AI systems, and existing control methods are already proving inadequate. The inner workings of AIs are not well understood, even by those who create them, and current AIs are by no means highly reliable. As AI capabilities continue to grow at an unprecedented rate, they could surpass human intelligence in nearly all respects relatively soon, creating a pressing need to manage the potential risks. | 2306.12001#157 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 157 | friends, never, day, work, first, hope, best american, history, NUMs, new, first, years, century, america, early, states, united, NUMth, became, world, many, one, today, time, war change, climate, development, economic, government, global, policy, need, sector, world, public, new, support, economy, national, social, future, health, impact, crisis kashmir, pakistan, bangladesh, khan, afghan, also, nepal, country, indian, kabul, jammu, singh, islamabad, ali, lahore, karachi game, points, first, season, two, three, win, second, four, team, lead, run, third, one, five, scored, home, games, point day, time, back, get, last, one, got, good, night, next, morning, went, first, trip, week, see, around, way, little new, year, first, last, time, next, NUMth, month, also, release, announced, two, months, march, since, october, september, week, may dating, gay, online, sites, date, site, tinder, free, men, best, matchmaking, meet, guy, hookup, guys, app, apps, relationship, singles, dates | 2306.16527#157 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 158 | The good news is that there are many courses of action we can take to substantially reduce these risks. The potential for malicious use can be mitigated by various measures, such as carefully targeted surveillance and limiting access to the most dangerous AIs. Safety regulations and cooperation between nations and corporations could help us resist competitive pressures driving us down a dangerous path. The probability of accidents can be reduced by a rigorous safety culture, among other factors, and by ensuring safety advances
outpace general capabilities advances. Finally, the risks inherent in building technology that surpasses our own intelligence can be addressed by redoubling efforts in several branches of AI control research.
As capabilities continue to grow, and social and systemic circumstances continue to evolve, estimates vary for when risks might reach a catastrophic or existential level. However, the uncertainty around these timelines, together with the magnitude of what could be at stake, makes a convincing case for a proactive approach to safeguarding humanity's future. Beginning this work immediately can help ensure that this technology transforms the world for the better, and not for the worse.
# Acknowledgements | 2306.12001#158 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 158 | week, may dating, gay, online, sites, date, site, tinder, free, men, best, matchmaking, meet, guy, hookup, guys, app, apps, relationship, singles, dates comic, marvel, comics, man, batman, spider, superhero, character, avengers, superman, universe, hero, captain, new, heroes, fans, issue, super, characters, also space, nasa, mission, mars, drone, launch, rocket, satellite, robot, earth, robots, drones, moon, first, station, orbit, satellites, spacecraft, technology music, jazz, musical, concert, piano, orchestra, composer, musicians, classical, symphony, played, performance, playing, performed, piece, work, instruments, also, festival, instrument money, pay, card, credit, bank, cash, vegas, payment, paid, account, las, payments, fees, cost, cards, amount, buy, service, fee shows, show, episodes, television, comedy, watch, cast, fans, also, new, seasons, character, drama, viewers, first | 2306.16527#158 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 159 | # Acknowledgements
We would like to thank Laura Hiscott, Avital Morris, David Lambert, Kyle Gracey, and Aidan O'Gara for assistance in drafting this paper. We would also like to thank Jacqueline Harding, Nate Sharadin, William D'Alessandro, Cameron Domenico Kirk-Gianini, Simon Goldstein, Alex Tamkin, Adam Khoja, Oliver Zhang, Jack Cunningham, Lennart Justen, Davy Deng, Ben Snyder, Willy Chertman, Justis Mills, Adam Jones, Hadrien Pouget, Nathan Calvin, Eric Gan, Nikola Jurkovic, Lukas Finnveden, Ryan Greenblatt, and Andrew Doris for helpful feedback.
# References
[1] David Malin Roodman. On the probability distribution of long-term changes in the growth rate of the global economy: An outside view. 2020.
[2] Tom Davidson. Could Advanced AI Drive Explosive Economic Growth? Tech. rep. June 2021.
[3] Carl Sagan. Pale Blue Dot: A Vision of the Human Future in Space. New York: Random House, 1994.
[4] Roman V Yampolskiy. "Taxonomy of Pathways to Dangerous Artificial Intelligence". In: AAAI Workshop: AI, Ethics, and Society. 2016. | 2306.12001#159 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 159 | Personal Finance 0.17%
Television Shows 0.74%
series, season, episode, netflix,
Celebrity Culture 0.11%
Environmental Conservation 0.32%
Physical/Quantum Sciences 0.35%
Astronomy 0.37%
Islamic/Middle Eastern Culture 0.19%
Gender Issues 0.14%
Fantasy/Mythology 0.03%
Video Game Mechanics 0.36%
MMORPG Gaming 1.16%
Energy and Environment 0.65%
Financial Regulations 0.57%
US Legislation 0.75%
Subjective Experience 0.91%
Parenthood 0.16%
Personal Experiences 1.93%
islam, | 2306.16527#159 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 160 | [4] Roman V Yampolskiy. "Taxonomy of Pathways to Dangerous Artificial Intelligence". In: AAAI Workshop: AI, Ethics, and Society. 2016.
[5] Keith Olson. "Aum Shinrikyo: once and future threat?" In: Emerging Infectious Diseases 5 (1999), pp. 513–516.
[6] Kevin M. Esvelt. Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics. 2022.
[7] Siro Igino Trevisanato. "The 'Hittite plague', an epidemic of tularemia and the first record of biological warfare." In: Medical hypotheses 69.6 (2007), pp. 1371–4.
[8] U.S. Department of State. Adherence to and Compliance with Arms Control, Nonproliferation, and Disarmament Agreements and Commitments. Government Report. U.S. Department of State, Apr. 2022.
[9] Robert Carlson. "The changing economics of DNA synthesis". In: Nature Biotechnology 27.12 (Dec. 2009). Number: 12 Publisher: Nature Publishing Group, pp. 1091–1094. | 2306.12001#160 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 160 | taylor, jackson, justin, swift, star, jennifer, singer, jay, tyler, cohen, nicole, spencer, also, eddie, cole, carrie, amy, lopez, bieber, casey water, river, land, environmental, forest, wildlife, conservation, area, natural, lake, areas, project, environment, rivers, dam, resources, forests, national, management water, air, chemical, used, process, material, surface, materials, quantum, temperature, high, oxygen, carbon, radiation, particles, liquid, salt, energy, pollution, chemicals earth, sun, moon, planet, sky, stars, solar, star, space, light, universe, planets, telescope, years, scientists, system, galaxy, eclipse, dark islamic, arabia, muslim, saudi, muslims, egypt, arab, dubai, allah, uae, ali, middle, abu, prophet, religious, muhammad, mosque, iran, egyptian women, men, woman, female, girls, gender, male, abortion, sexual, girl, young, sex, life, equality, feminist, man, violence, ladies, rights, | 2306.16527#160 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 161 | [10] Sarah R. Carter, Jaime M. Yassif, and Chris Isaac. Benchtop DNA Synthesis Devices: Capabilities, Biosecurity Implications, and Governance. Report. Nuclear Threat Initiative, 2023.
[11] Fabio L. Urbina et al. "Dual use of artificial-intelligence-powered drug discovery". In: Nature Machine Intelligence (2022).
[12] John Jumper et al. "Highly accurate protein structure prediction with AlphaFold". In: Nature 596.7873 (2021), pp. 583–589.
[13] Zachary Wu et al. "Machine learning-assisted directed protein evolution with combinatorial libraries". In: Proceedings of the National Academy of Sciences 116.18 (2019), pp. 8852–8858.
[14] Emily Soice et al. "Can large language models democratize access to dual-use biotechnology?" In: 2023.
[15] Max Tegmark. Life 3.0: Being human in the age of artificial intelligence. Vintage, 2018.
[16] Leanne Pooley. We Need To Talk About A.I. 2020. | 2306.12001#161 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential
for increasingly advanced AI systems to pose catastrophic risks. Although
numerous risks have been detailed separately, there is a pressing need for a
systematic discussion and illustration of the potential dangers to better
inform efforts to mitigate them. This paper provides an overview of the main
sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause
harm; AI race, in which competitive environments compel actors to deploy unsafe
AIs or cede control to AIs; organizational risks, highlighting how human
factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far
more intelligent than humans. For each category of risk, we describe specific
hazards, present illustrative stories, envision ideal scenarios, and propose
practical suggestions for mitigating these dangers. Our goal is to foster a
comprehensive understanding of these risks and inspire collective and proactive
efforts to ensure that AIs are developed and deployed in a safe manner.
Ultimately, we hope this will allow us to realize the benefits of this powerful
technology while minimizing the potential for catastrophic outcomes. | http://arxiv.org/pdf/2306.12001 | Dan Hendrycks, Mantas Mazeika, Thomas Woodside | cs.CY, cs.AI, cs.LG | null | null | cs.CY | 20230621 | 20231009 | [
{
"id": "1908.09203"
},
{
"id": "1909.08593"
},
{
"id": "2109.13916"
}
] |
2306.16527 | 161 | women, men, woman, female, girls, gender, male, abortion, sexual, girl, young, sex, life, equality, feminist, man, violence, ladies, rights, boys sam, lewis, max, rings, twin, troy, monkey, toy, stephen, palmer, doll, hobbit, tolkien, zeus, lord, monkeys, seth, horse, toys, witch attack, damage, enemy, pokemon, use, weapon, enemies, level, also, fight, battle, attacks, players, power, weapons, ability, magic, hero, character, armor game, games, players, play, new, player, world, playing, characters, gameplay, mode, character, also, story, battle, fun, experience, free, fantasy energy, oil, gas, power, carbon, solar, fuel, emissions, electricity, climate, wind, renewable, coal, natural, green, production, industry, fossil, environmental tax, financial, bank, government, debt, income, banks, money, taxes, budget, economy, finance, loan, pay, billion, loans, credit, economic, fund state, bill, would, federal, house, senate, congress, law, legislation, act, states, | 2306.16527#161 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal
benchmarks. However, the datasets used to train these models have not been
released, and the collection process has not been fully specified. We introduce
the OBELICS dataset, an open web-scale filtered dataset of interleaved
image-text documents comprising 141 million web pages extracted from Common
Crawl, 353 million associated images, and 115 billion text tokens. We describe
the dataset creation process, present comprehensive filtering rules, and
provide an analysis of the dataset's content. To show the viability of OBELICS,
we train vision and language models of 9 and 80 billion parameters named
IDEFICS, and obtain competitive performance on different multimodal benchmarks.
We release our dataset, models and code. | http://arxiv.org/pdf/2306.16527 | Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh | cs.IR, cs.CV | null | null | cs.IR | 20230621 | 20230821 | [
{
"id": "2304.06939"
},
{
"id": "2101.00027"
},
{
"id": "2303.02506"
},
{
"id": "2304.14108"
}
] |
2306.12001 | 162 | [16] Leanne Pooley. We Need To Talk About A.I. 2020.
[17] Richard Sutton [@RichardSSutton]. It will be the greatest intellectual achievement of all time. An achievement of science, of engineering, and of the humanities, whose significance is beyond humanity, beyond life, beyond good and bad. Tweet. Sept. 2022.
[18] Richard Sutton. AI Succession. Video. Sept. 2023.
[19] A. Sanz-García et al. "Prevalence of Psychopathy in the General Adult Population: A Systematic Review and Meta-Analysis". In: Frontiers in Psychology 12 (2021).
[20] U.S. Department of State Office of The Historian. "U.S. Diplomacy and Yellow Journalism, 1895–1898". In: ().
[21] Onur Varol et al. "Online Human-Bot Interactions: Detection, Estimation, and Characterization". In: ArXiv abs/1703.03107 (2017).
[22] Matthew Burtell and Thomas Woodside. "Artificial Influence: An Analysis Of AI-Driven Persuasion". In: ArXiv abs/2303.08721 (2023). | 2306.12001#162 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 162 | pay, billion, loans, credit, economic, fund state, bill, would, federal, house, senate, congress, law, legislation, act, states, governor, government, passed, public, committee, lawmakers, plan, funding like, good, really, one, well, much, great, bit, even, little, quite, also, though, still, pretty, lot, see, get, better, would children, child, kids, parents, baby, age, young, birth, parent, pregnancy, pregnant, family, families, babies, adults, mother, old, early, mothers like, get, one, know, got, really, good, little, even, think, guy, thing, going, love, pretty, right, let, much, never, back school, students, education, schools, college, student, high, university, class, teachers, year, teacher, campus, program, learning, teaching, classes, children, grade, parents mexico, spanish, italian, spain, italy, san, mexican, latin, puerto, del, cuba, rico, colombia, costa, america, cuban, venezuela, juan, country | 2306.16527#162 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 163 | [23] Anna Tong. "What happens when your AI chatbot stops loving you back?" In: Reuters (Mar. 2023).
[24] Pierre-François Lovens. "Sans ces conversations avec le chatbot Eliza, mon mari serait toujours là". In: La Libre (Mar. 2023).
[25] Cristian Vaccari and Andrew Chadwick. "Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News". In: Social Media + Society 6 (2020).
[26] Moin Nadeem, Anna Bethke, and Siva Reddy. "StereoSet: Measuring stereotypical bias in pretrained language models". In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Online: Association for Computational Linguistics, Aug. 2021, pp. 5356–5371.
[27] Evan G. Williams. "The Possibility of an Ongoing Moral Catastrophe". en. In: Ethical Theory and Moral Practice 18.5 (Nov. 2015), pp. 971–982. | 2306.12001#163 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.12001 | 164 | [28] The Nucleic Acid Observatory Consortium. "A Global Nucleic Acid Observatory for Biodefense and Planetary Health". In: ArXiv abs/2108.02678 (2021).
[29] Toby Shevlane. "Structured access to AI capabilities: an emerging paradigm for safe AI deployment". In: ArXiv abs/2201.05159 (2022).
[30] Jonas Schuett et al. Towards best practices in AGI safety and governance: A survey of expert opinion. 2023. arXiv: 2305.07153.
[31] Yonadav Shavit. "What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring". In: ArXiv abs/2303.11341 (2023).
[32] Anat Lior. "AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy". In: Torts & Products Liability Law eJournal (2019).
[33] Maximilian Gahntz and Claire Pershan. Artificial Intelligence Act: How the EU can take on the challenge posed by general-purpose AI systems. Nov. 2022.
[34] Paul Scharre. Army of None: Autonomous Weapons and The Future of War. Norton, 2018. | 2306.12001#164 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 164 | system, new, technology, systems, development, also, use, time, process, high, based, performance, work, used, well, using, provide, quality, level, developed rights, people, government, human, violence, protest, freedom, police, country, protests, law, civil, political, protesters, movement, state, justice, activists, right, groups scott, ryan, wilson, joe, anderson, wave, josh, sarah, phil, surf, jackie, waves, robinson, logan, beach, ken, surfing, phoenix, duncan, gibson brazil, brazilian, miller, rio, phillips, paulo, portuguese, peterson, grande, são, janeiro, ivy, bolsonaro, herman, silva, state, amazon, sao, spike, hernandez poetry, writing, essay, writer, poem, poems, literary, literature, work, poet, book, published, writers, wrote, write, english, works, collection, written, life family, years, wife, home, mary, born, school, life, funeral, friends, died, church, death, service, many, member, may, mrs, passed | 2306.16527#164 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 165 | [34] Paul Scharre. Army of None: Autonomous Weapons and The Future of War. Norton, 2018.
[35] DARPA. "AlphaDogfight Trials Foreshadow Future of Human-Machine Symbiosis". In: (2020).
[36] Panel of Experts on Libya. Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council. United Nations Security Council Document S/2021/229. United Nations, Mar. 2021.
[37] David Hambling. Israel used world's first AI-guided combat drone swarm in Gaza attacks. 2021.
[38] Zachary Kallenborn. Applying arms-control frameworks to autonomous weapons. en-US. Oct. 2021.
[39] J.E. Mueller. War, Presidents, and Public Opinion. UPA book. University Press of America, 1985.
[40] Matteo E. Bonfanti. "Artificial intelligence and the offense–defense balance in cyber security". In: Cyber Security Politics: Socio-Technological Transformations and Political Fragmentation. Ed. by M.D. Cavelty and A. Wenger. CSS Studies in Security and International Relations. Taylor & Francis, 2022. Chap. 5, pp. 64–79. | 2306.12001#165 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 165 | years, wife, home, mary, born, school, life, funeral, friends, died, church, death, service, many, member, may, mrs, passed cricket, india, test, match, runs, team, england, series, first, wickets, ipl, overs, game, tNUM, played, indian, ball, innings, captain canada, canadian, ireland, irish, toronto, ontario, vancouver, dublin, province, alberta, northern, canadians, ottawa, montreal, provincial, centre, quebec, north, trudeau music, album, song, artists, artist, hip, single, hop, released, new, songs, rapper, track, video, rap, pop, release, hit, singer prison, crime, criminal, court, charges, sexual, trial, case, jail, years, crimes, guilty, victims, murder, abuse, accused, sentence, justice, convicted university, research, science, professor, institute, studies, college, scientific, school, work, study, engineering, national, international, department, students, degree, academic, center williams, hill, ross, carter, kennedy, clark, jan, nelson, jordan, stanley, rated, murphy, arthur, marshall, hudson, feb, nov, oct, mar | 2306.16527#165 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 166 | [41] Yisroel Mirsky et al. "The Threat of Offensive AI to Organizations". In: Computers & Security (2023).
[42] Kim Zetter. "Meet MonsterMind, the NSA Bot That Could Wage Cyberwar Autonomously". In: Wired (Aug. 2014).
[43] Andrei Kirilenko et al. "The Flash Crash: High-Frequency Trading in an Electronic Market". In: The Journal of Finance 72.3 (2017), pp. 967–998.
[44] Michael C Horowitz. The Diffusion of Military Power: Causes and Consequences for International Politics. Princeton University Press, 2010.
[45] Robert E. Jervis. "Cooperation under the Security Dilemma". In: World Politics 30 (1978), pp. 167–214.
[46] Richard Danzig. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority. Tech. rep. Center for a New American Security, June 2018.
[47] Billy Perrigo. Bing's AI Is Threatening Users. That's No Laughing Matter. en. Feb. 2023. | 2306.12001#166 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 166 | national, international, department, students, degree, academic, center williams, hill, ross, carter, kennedy, clark, jan, nelson, jordan, stanley, rated, murphy, arthur, marshall, hudson, feb, nov, oct, mar weather, ice, snow, mountain, winter, north, temperatures, cold, climate, south, high, lake, rain, temperature, east, west, summer, conditions, ski blood, brain, disease, symptoms, may, heart, patients, body, treatment, also, cause, risk, pain, condition, effects, common, severe, doctor, pressure bitcoin, blockchain, crypto, cryptocurrency, digital, mining, ethereum, cryptocurrencies, currency, exchange, btc, market, network, tokens, users, price, nft, trading, transactions, token food, diet, weight, health, body, fat, eating, foods, eat, sugar, healthy, also, high, diabetes, people, meat, protein, obesity, levels back, get, time, take, right, move, way, next, see, start, around, keep, make, end, away, going, one, left, another, | 2306.16527#166 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 167 | [47] Billy Perrigo. Bing's AI Is Threatening Users. That's No Laughing Matter. en. Feb. 2023.
[48] Nico Grant and Karen Weise. "In A.I. Race, Microsoft and Google Choose Speed Over Caution". en-US. In: The New York Times (Apr. 2023).
[49] Thomas H. Klier. "From Tail Fins to Hybrids: How Detroit Lost Its Dominance of the U.S. Auto Market". In: RePEc (May 2009).
[50] Robert Sherefkin. "Ford 100: Defective Pinto Almost Took Ford's Reputation With It". In: Automotive News (June 2003).
[51] Lee Strobel. Reckless Homicide?: Ford's Pinto Trial. en. And Books, 1980.
[52] Grimshaw v. Ford Motor Co. May 1981.
[53] Paul C. Judge. "Selling Autos by Selling Safety". en-US. In: The New York Times (Jan. 1990).
[54] Theo Leggett. "737 Max crashes: Boeing says not guilty to fraud charge". en-GB. In: BBC News (Jan. 2023). | 2306.12001#167 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.12001 | 168 | [54] Theo Leggett. "737 Max crashes: Boeing says not guilty to fraud charge". en-GB. In: BBC News (Jan. 2023).
[55] Edward Broughton. "The Bhopal disaster and its aftermath: a review". In: Environmental Health 4.1 (May 2005), p. 6.
[56] Charlotte Curtis. "Machines vs. Workers". en-US. In: The New York Times (Feb. 1983).
[57] Thomas Woodside et al. "Examples of AI Improving AI". In: (2023). URL: https://ai-improving-ai.safe.ai.
[58] Stuart Russell. Human Compatible: Artificial Intelligence and the Problem of Control. en. Penguin, Oct. 2019.
[59] Dan Hendrycks. "Natural Selection Favors AIs over Humans". In: ArXiv abs/2303.16200 (2023).
[60] Dan Hendrycks. The Darwinian Argument for Worrying About AI. en. May 2023.
[61] Richard C. Lewontin. "The Units of Selection". In: Annual Review of Ecology, Evolution, and Systematics 1 (1970), pp. 1–18. | 2306.12001#168 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.12001 | 169 | [62] Ethan Kross et al. "Facebook use predicts declines in subjective well-being in young adults". In: PloS one (2013).
[63] Laura Martínez-Íñigo et al. "Intercommunity interactions and killings in central chimpanzees (Pan troglodytes troglodytes) from Loango National Park, Gabon". In: Primates; Journal of Primatology 62 (2021), pp. 709–722.
[64] Anne E Pusey and Craig Packer. "Infanticide in Lions: Consequences and Counterstrategies". In: Infanticide and parental care (1994), p. 277.
[65] Peter D. Nagy and Judit Pogany. "The dependence of viral RNA replication on co-opted host factors". In: Nature Reviews. Microbiology 10 (2011), pp. 137–149.
[66] Alfred Buschinger. "Social Parasitism among Ants: A Review". In: Myrmecological News 12 (Sept. 2009), pp. 219–235.
[67] Greg Brockman, Ilya Sutskever, and OpenAI. Introducing OpenAI. Dec. 2015.
[68] Devin Coldewey. OpenAI shifts from nonprofit to 'capped-profit' to attract capital. Mar. 2019. | 2306.12001#169 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 169 | NUMth, town, village, name, william, george, century, hall, john, family, built, castle, early, house, mill, street, history, became, morris power, light, battery, use, control, device, used, system, led, also, using, devices, high, signal, air, electrical, switch, low, sensor theatre, show, dance, stage, play, theater, performance, production, audience, musical, opera, arts, broadway, dancing, cast, performances, performing, company, ballet, shakespeare mental, people, health, disorder, depression, help, self, anxiety, stress, emotional, person, life, physical, may, often, brain, also, social, autism, feel post, blog, read, comments, posted, like, would, one, see, com, please, know, article, share, site, email, comment, posts, link, page drug, drugs, cannabis, marijuana, use, cbd, medical, effects, addiction, fda, used, alcohol, cocaine, substance, prescription, heroin, treatment, products, thc, also tree, trees, trail, water, road, river, along, forest, area, around, small, park, one, near, old, wood, way, hill, across, ground | 2306.16527#169 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 170 | [68] Devin Coldewey. OpenAI shifts from nonprofit to 'capped-profit' to attract capital. Mar. 2019.
[69] Kyle Wiggers, Devin Coldewey, and Manish Singh. Anthropic's $5B, 4-year plan to take on OpenAI. Apr. 2023.
[70] Center for AI Safety. Statement on AI Risk ("Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.") 2023. URL: https://www.safe.ai/statement-on-ai-risk.
[71] Richard Danzig et al. Aum Shinrikyo: Insights into How Terrorists Develop Biological and Chemical Weapons. Tech. rep. Center for a New American Security, 2012. URL: https://www.jstor.org/stable/resrep06323.
[72] Timnit Gebru et al. "Datasheets for datasets". en. In: Communications of the ACM 64.12 (Dec. 2021), pp. 86–92.
[73] Christian Szegedy et al. "Intriguing properties of neural networks". In: CoRR (Dec. 2013). | 2306.12001#170 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 170 | substance, prescription, heroin, treatment, products, thc, also tree, trees, trail, water, road, river, along, forest, area, around, small, park, one, near, old, wood, way, hill, across, ground red, blue, white, green, black, yellow, color, light, flag, orange, grey, colors, gray, logo, one, pearl, hat, look, colour, two israel, israeli, fish, palestinian, jerusalem, fishing, gaza, palestinians, netanyahu, hamas, jewish, bank, west, palestine, state, arab, israelis, trout, salmon airport, flight, aircraft, air, airlines, plane, flights, travel, airline, passengers, aviation, flying, fly, international, airports, pilot, passenger, boeing, service plastic, waste, made, used, use, bags, make, bag, paper, items, nike, fabric, shoes, cola, using, coca, trash, recycling, also, shoe would, even, one, could, however, much, fact, yet, rather, far, though, many, well, might, perhaps, less, long, despite, may, time | 2306.16527#170 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 171 | [73] Christian Szegedy et al. "Intriguing properties of neural networks". In: CoRR (Dec. 2013).
[74] Dan Hendrycks et al. "Unsolved Problems in ML Safety". In: arXiv preprint arXiv:2109.13916 (2021).
[75] John Uri. 35 Years Ago: Remembering Challenger and Her Crew. und. Text. Jan. 2021.
[76] International Atomic Energy Agency. The Chernobyl Accident: Updating of INSAG-1. Technical Report INSAG-7. Vienna, Austria: International Atomic Energy Agency, 1992.
[77] Matthew Meselson et al. "The Sverdlovsk anthrax outbreak of 1979." In: Science 266 5188 (1994), pp. 1202–8.
[78] Daniel M Ziegler et al. "Fine-tuning language models from human preferences". In: arXiv preprint arXiv:1909.08593 (2019).
[79] Charles Perrow. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press, 1984. | 2306.12001#171 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 171 | would, even, one, could, problem, many, may, problems, due, however, issues, issue, would, even, also, cause, result, still, time, situation, damage, impact, without gun, shooting, guns, malaysia, hunting, rifle, firearms, shot, deer, weapons, shoot, weapon, malaysian, pistol, firearm, ammunition, rmNUM, hunt, buck disney, magic, world, ray, animation, alice, walt, fairy, ride, parks, disneyland, park, animated, theme, magical, pixar, jungle, studios, orlando, characters syria, turkey, forces, iraq, military, security, attacks, attack, killed, syrian, terrorist, turkish, war, people, state, group, isis, terrorism, terrorists, government eyes, like, face, could, head, hand, back, little, looked, hands, said, around, look, body, would, voice, see, away, hair, felt building, house, room, space, built, floor, construction, wall, buildings, new, | 2306.16527#171 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 172 | [79] Charles Perrow. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press, 1984.
[80] Mitchell Rogovin and George T. Frampton Jr. Three Mile Island: a report to the commissioners and to the public. Volume I. English. Tech. rep. NUREG/CR-1250(Vol.1). Nuclear Regulatory Commission, Washington, DC (United States). Three Mile Island Special Inquiry Group, Jan. 1979.
[81] Richard Rhodes. The Making of the Atomic Bomb. New York: Simon & Schuster, 1986.
[82] Sébastien Bubeck et al. "Sparks of Artificial General Intelligence: Early experiments with GPT-4". In: ArXiv abs/2303.12712 (2023).
[83] Theodore I. Lidsky and Jay S. Schneider. "Lead neurotoxicity in children: basic mechanisms and clinical correlates." In: Brain: a journal of neurology 126 Pt 1 (2003), pp. 5–19.
[84] Brooke T. Mossman et al. "Asbestos: scientific developments and implications for public policy." In: Science 247 4940 (1990), pp. 294–301. | 2306.12001#172 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.12001 | 173 | [85] Kate Moore. The Radium Girls: The Dark Story of America's Shining Women. Naperville, IL: Sourcebooks, 2017.
[86] Stephen S. Hecht. "Tobacco smoke carcinogens and lung cancer." In: Journal of the National Cancer Institute 91 14 (1999), pp. 1194–210.
[87] Mario J. Molina and F. Sherwood Rowland. "Stratospheric sink for chlorofluoromethanes: chlorine atom-catalysed destruction of ozone". In: Nature 249 (1974), pp. 810–812.
[88] James H. Kim and Anthony R. Scialli. "Thalidomide: the tragedy of birth defects and the effective treatment of disease." In: Toxicological sciences: an official journal of the Society of Toxicology 122 1 (2011), pp. 1–6.
[89] Betul Keles, Niall McCrae, and Annmarie Grealish. "A systematic review: the influence of social media on depression, anxiety and psychological distress in adolescents". In: International Journal of Adolescence and Youth 25 (2019), pp. 79–93.
[90] Zakir Durumeric et al. "The Matter of Heartbleed". In: Proceedings of the 2014 Conference on Internet Measurement Conference (2014). | 2306.12001#173 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 173 | Middle Eastern Conflict: 0.81%
Physical Descriptions: 0.48%
Architecture: 0.62%
Travel Destinations: 0.94%
Computer Hardware: 0.41%
African Nations: 0.17%
Military Operations: 0.37%
Tobacco and Cookies: 0.15%
Nigerian Politics: 0.67%
Family Dynamics: 0.54%
Farming and Agriculture: 0.4%
Retail Industry: 0.27%
Online Resources: 0.32%
Personal Experiences: 2.07%
Theology and Morality: 0.45%
Sports and Games: 1.29%
Asia and Pacific: 0.07%
Healthcare: 0.27% | 2306.16527#173 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
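The keyword lists and topic proportions in the 2306.16527 rows are outputs of a topic model fit on the dataset's text. Below is a minimal sketch of how topic keyword lists and corpus-level shares like those above can be produced with LDA; it uses a toy corpus and placeholder n_components, and is an illustration rather than the OBELICS authors' exact pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for the web documents the topics were fit on.
docs = [
    "hotel park travel trip visit beautiful places island tour",
    "intel memory laptop cpu graphics processor hardware gaming",
    "plant farmers farm soil seeds crop harvest agriculture",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic weights

# Top words per topic: these become keyword lists like the ones in the rows above.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")

# Corpus-level topic shares, comparable to the percentages in the row above.
shares = doc_topics.mean(axis=0) * 100
print([f"{s:.2f}%" for s in shares])
```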
2306.12001 | 174 | [90] Zakir Durumeric et al. "The Matter of Heartbleed". In: Proceedings of the 2014 Conference on Internet Measurement Conference (2014).
[91] Tony Tong Wang et al. "Adversarial Policies Beat Professional-Level Go AIs". In: ArXiv abs/2211.00241 (2022).
[92] T. R. Laporte and Paula M. Consolini. "Working in Practice But Not in Theory: Theoretical Challenges of 'High-Reliability Organizations'". In: Journal of Public Administration Research and Theory 1 (1991), pp. 19–48.
[93] Thomas G. Dietterich. "Robust artificial intelligence and robust human organizations". In: Frontiers of Computer Science 13 (2018), pp. 1–3.
[94] Nancy G Leveson. Engineering a safer world: Systems thinking applied to safety. The MIT Press, 2016.
[95] David Manheim. Building a Culture of Safety for AI: Perspectives and Challenges. 2023.
[96] National Research Council et al. Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants. Washington, D.C.: National Academies Press, Oct. 2014. | 2306.12001#174 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 174 | city, hotel, park, one, visit, tour, world, town, place, travel, area, many, also, trip, beautiful, places, visitors, located, island intel, performance, computer, memory, amd, core, graphics, usb, windows, laptop, drive, cpu, card, power, nvidia, hardware, gpu, processor, gaming africa, south, african, kenya, country, cape, uganda, rNUM, zimbabwe, continent, national, congo, africans, west, tanzania, president, town, johannesburg, rwanda, nairobi military, army, war, soldiers, forces, troops, general, service, battle, soldier, commander, men, armed, corps, force, command, training, unit, guard, combat cookies, website, smoking, use, tobacco, cigarettes, buy, smoke, experience, cigar, cookie, necessary, used, ivermectin, cigarette, consent, online, may, vaping, also state, nigeria, said, government, nigerian, governor, president, ghana, lagos, buhari, also, nNUM, nigerians, country, national, federal, people, apc, security, abuja family, father, mother, son, old, daughter, home, children, years, year, parents, wife, young, brother, life, dad, two, house, sister | 2306.16527#174 | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | Large multimodal models trained on natural documents, which interleave images
and text, outperform models trained on image-text pairs on various multimodal benchmarks.
2306.12001 | 175 | [97] Diane Vaughan. The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago, IL: University of Chicago Press, 1996.
[98] Dan Lamothe. Air Force Swears: Our Nuke Launch Code Was Never '00000000'. Jan. 2014.
[99] Toby Ord. The precipice: Existential risk and the future of humanity. Hachette Books, 2020.
[100] U.S. Nuclear Regulatory Commission. Final Safety Culture Policy Statement. Federal Register. 2011.
[101] Bruce Schneier. "Inside the Twisted Mind of the Security Professional". In: Wired (Mar. 2008).
[102] Dan Hendrycks and Mantas Mazeika. "X-Risk Analysis for AI Research". In: ArXiv abs/2206.05862 (2022).
[103] CSRC Content Editor. Red Team - Glossary. EN-US.
[104] Amba Kak and Sarah West. Confronting Tech Power. 2023.
[105] Nassim Nicholas Taleb. "The Fourth Quadrant: A Map of the Limits of Statistics". In: Edge, 2008.
[106] Irene Solaiman et al. "Release strategies and the social impacts of language models". In: arXiv preprint arXiv:1908.09203 (2019). | 2306.12001#175 | An Overview of Catastrophic AI Risks | Rapid advancements in artificial intelligence (AI) have sparked growing
concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks.
2306.16527 | 175 |
governor, president, ghana, lagos, buhari, also, nNUM, nigerians, country, national, federal, people, apc, security, abuja
family, father, mother, son, old, daughter, home, children, years, year, parents, wife, young, brother, life, dad, two, house, sister
plant, farmers, farm, food, plants, agriculture, garden, soil, agricultural, seeds, grow, growing, seed, crop, crops, production, farming, farms, fruit, harvest
store, market, products, sales, amazon, stores, customers, price, company, business, retail, product, buy, shop, online, consumers, brand, shopping, sell, selling
download, information, free, page, available, online, book, edition, website, pdf, article, site, published, library, content, please, text, may, read
would, time, could, one, didn, first, back, got, went, years, came, wanted, made, started, took, never, day, wasn, thought, even
2306.12001 | 176 |
[107] Neal Woollen. Incident Response (Why Planning is Important).
[108] Huashan Li et al. "The impact of chief risk officer appointments on firm risk and operational efficiency". In: Journal of Operations Management (2022).
[109] Role of Internal Audit. URL: https://www.marquette.edu/riskunit/internalaudit/role.shtml.
[110] Heather Adkins et al. Building Secure and Reliable Systems: Best Practices for Designing, Implementing, and Maintaining Systems. O'Reilly Media, 2020.
[111] Center for Security and Emerging Technology. AI Safety – Emerging Technology Observatory Research Almanac. 2023.
[112] Donald T Campbell. "Assessing the impact of planned social change". In: Evaluation and program planning 2.1 (1979), pp. 67–90.
[113] Yohan J. John et al. "Dead rats, dopamine, performance metrics, and peacock tails: proxy failure is an inherent risk in goal-oriented systems". In: Behavioral and Brain Sciences (2023), pp. 1–68. DOI: 10.1017/S0140525X23002753.
[114] Jonathan Stray. "Aligning AI Optimization to Community Well-Being". In: International Journal of Community Well-Being (2020).
2306.16527 | 176 |
god, man, one, lord, world, life, earth, upon, power, may, spirit, human, evil, love, heaven, gods, soul, must, every, shall
season, game, team, football, nfl, yards, baseball, games, players, league, coach, field, play, year, player, bowl, quarterback, teams, first
japan, japanese, tokyo, vietnam, indonesia, pacific, hawaii, island, vietnamese, indonesian, islands, asian, also, asia, west, rice, jakarta, abe, hawaiian
health, care, medical, hospital, patients, doctors, healthcare, patient, treatment, services, medicine, doctor, hospitals, hiv, nursing, nurses, emergency, insurance, nurse, staff
day, memorial, anniversary, national, NUMth, ceremony, veterans, flag, honor, statue, cemetery, people, nation, war, country, president, service, years, monument
gold, collection, silver, watch, auction, box, original, sold, coin, coins, one, made, sale, watches, design, set, edition, also, rare
2306.12001 | 177 | [115] Jonathan Stray et al. "What are you optimizing for? Aligning Recommender Systems with Human Values". In: ArXiv abs/2107.10939 (2021).
[116] Ziad Obermeyer et al. "Dissecting racial bias in an algorithm used to manage the health of populations". In: Science 366 (2019), pp. 447–453.
[117] Dario Amodei and Jack Clark. Faulty reward functions in the wild. 2016.
[118] Alexander Pan, Kush Bhatia, and Jacob Steinhardt. "The effects of reward misspecification: Mapping and mitigating misaligned models". In: ICLR (2022).
[119] G. Thut et al. "Activation of the human brain by monetary reward". In: Neuroreport 8.5 (1997), pp. 1225–1228.
[120] Edmund T. Rolls. "The Orbitofrontal Cortex and Reward". In: Cerebral Cortex 10.3 (Mar. 2000), pp. 284–294.
[121] T. Schroeder. Three Faces of Desire. Philosophy of Mind Series. Oxford University Press, USA, 2004.
2306.16527 | 177 |
Commemorations: 0.21%
Collectibles and Auctions: 0.32%
East Asia: 0.18%
Maritime Exploration: 0.4%
Natural Disasters: 0.39%
Legal Matters: 0.69%
Dimensions and Positioning: 0.47%
Relationships and Marriage: 0.18%
Community Projects: 0.84%
Photography: 0.26%
Competitive Sports: 0.88%
Innovation and Science: 0.57%
Personal Opinions: 1.87%
Statistics: 0.99%
Personal Communication: 0.15%
Animal Companions: 0.3%
Scientific Research: 0.41%
2306.12001 | 178 | [122] Joseph Carlsmith. "Existential Risk from Power-Seeking AI". In: Oxford University Press (2023).
[123] John Mearsheimer. "Structural realism". In: Oxford University Press, 2007.
[124] Bowen Baker et al. "Emergent Tool Use From Multi-Agent Autocurricula". In: International Conference on Learning Representations. 2020.
[125] Dylan Hadfield-Menell et al. "The Off-Switch Game". In: ArXiv abs/1611.08219 (2016).
[126] Alexander Pan et al. "Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark." In: ICML (2023).
[127] "Lyndon Baines Johnson". In: Oxford Reference (2016).
[128] Anton Bakhtin et al. "Human-level play in the game of Diplomacy by combining language models with strategic reasoning". In: Science 378 (2022), pp. 1067–1074.
2306.16527 | 178 |
china, chinese, kong, hong, singapore, philippines, beijing, taiwan, thailand, shanghai, asia, also, thai, province, asian, country, philippine, city, manila
sea, island, ship, boat, ocean, water, coast, beach, bay, ships, marine, islands, boats, cruise, port, waters, crew, fishing, sailing
fire, people, storm, hurricane, disaster, emergency, fires, damage, flood, earthquake, rescue, smoke, flooding, firefighters, homes, residents, burning, hit, area
court, law, case, judge, legal, supreme, justice, decision, attorney, filed, trial, cases, courts, lawyer, lawyers, lawsuit, appeal, ruling, judges
two, side, one, top, right, back, cut, line, use, small, used, hand, like, left, body, front, size, using, around
marriage, sex, relationship, married, wedding, love, couple, sexual, divorce, man, husband, wife, couples, together, woman, partner, men, one, relationships, bride
2306.12001 | 179 | [129] Paul Christiano et al. Deep reinforcement learning from human preferences. Discussed in https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity. 2017. arXiv: 1706.03741.
[130] Xinyun Chen et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. 2017. arXiv: 1712.05526.
[131] Andy Zou et al. Benchmarking Neural Network Proxy Robustness to Optimization Pressure. 2023.
[132] Miles Turpin et al. "Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting". In: ArXiv abs/2305.04388 (2023).
[133] Collin Burns et al. "Discovering Latent Knowledge in Language Models Without Supervision". en. In: The Eleventh International Conference on Learning Representations. Feb. 2023.
[134] Andy Zou et al. Representation engineering: Understanding and controlling the inner workings of neural networks. 2023.
[135] Catherine Olsson et al. "In-context Learning and Induction Heads". In: ArXiv abs/2209.11895 (2022).
2306.16527 | 179 |
community, support, group, people, members, program, help, local, foundation, event, also, work, organization, part, project, together, youth, young, year
image, camera, images, photo, photos, NUMd, photography, pictures, cameras, picture, light, lens, photographer, capture, photographs, taken, shot, look, using, shoot
team, players, teams, cup, tournament, world, football, competition, final, round, golf, play, club, match, first, won, league, win, sports
world, human, new, reality, create, like, time, life, future, nature, work, experience, way, process, space, ideas, different, form, idea, science
people, know, like, think, say, even, want, make, one, something, things, someone, way, doesn, would, good, need, person, feel, never
percent, per, year, number, according, cent, average, report, increase, years, rate, million, data, population, last, people, increased, growth, higher
2306.12001 | 180 | [136] Kevin Ro Wang et al. "Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small". en. In: The Eleventh International Conference on Learning Representations. Feb. 2023.
[137] Xinyang Zhang, Zheng Zhang, and Ting Wang. "Trojaning Language Models for Fun and Profit". In: 2021 IEEE European Symposium on Security and Privacy (EuroS&P) (2020), pp. 179–197.
[138] Jiashu Xu et al. "Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models". In: ArXiv abs/2305.14710 (2023).
[139] Dan Hendrycks et al. "Unsolved Problems in ML Safety". In: ArXiv abs/2109.13916 (2021).
[140] Nora Belrose et al. "LEACE: Perfect linear concept erasure in closed form". In: ArXiv abs/2306.03819 (2023).
2306.16527 | 180 |
said, would, told, people, added, could, asked, also, going, think, want, year, last, say, saying, one, interview, make, come, according
dog, dogs, cat, animals, animal, cats, horse, pet, breed, horses, pets, also, owner, bull, owners, pig, rescue, puppy, pigs, humans
study, research, data, researchers, found, results, studies, risk, analysis, evidence, group, published, test, findings, based, university, likely, may, could
man, back, one, left, door, street, front, around, away, saw, car, went, two, night, told, heard, took, later, behind, another
race, racing, team, season, track, car, races, second, first, win, championship, lap, two, driver, top, series, year, drivers, fNUM
united, states, iran, border, trump, nuclear, president, immigration, security, country, administration, foreign, american, countries, migrants, policy, refugees, immigrants, government
2306.12001 | 181 | [141] Alberto Giubilini and Julian Savulescu. "The Artificial Moral Advisor. The 'Ideal Observer' Meets Artificial Intelligence". eng. In: Philosophy & Technology 31.2 (2018), pp. 169–188.
[142] Nick Beckstead. On the overwhelming importance of shaping the far future. 2013.
[143] Jens Rasmussen. "Risk management in a Dynamic Society: A Modeling Problem". English. In: Proceedings of the Conference on Human Interaction with Complex Systems, 1996.
[144] Jennifer Robertson. "Human rights vs. robot rights: Forecasts from Japan". In: Critical Asian Studies 46.4 (2014), pp. 571–598.
[145] John Rawls. Political Liberalism. Columbia University Press, 1993.
[146] Toby Newberry and Toby Ord. "The Parliamentary Approach to Moral Uncertainty". In: 2021.
[147] F.R. Frola and C.O. Miller. System Safety in Aircraft Acquisition. en. Tech. rep. Jan. 1984.
# A Frequently Asked Questions
2306.12001 | 182 |
Since AI catastrophic risk is a new challenge, albeit one that has been the subject of extensive speculation in popular culture, there are many questions about if and how it might manifest. Although public attention may focus on the most dramatic risks, some of the more mundane sources of risk discussed in this document may be equally severe. In addition, many of the simplest ideas one might have for addressing these risks turn out to be insufficient on closer inspection. We will now address some of the most common questions and misconceptions about catastrophic AI risk.
# 1. Shouldn't we address AI risks in the future when AIs can actually do everything a human can?
It is not necessarily the case that human-level AI is far in the future. Many top AI researchers think that human-level AI will be developed fairly soon, so urgency is warranted. Furthermore, waiting until the last second to start addressing AI risks is waiting until it's too late. Just as waiting to fully understand COVID-19 before taking any action would have been a mistake, it is ill-advised to procrastinate on safety and wait for malicious AIs or bad actors to cause harm before taking AI risks seriously.
2306.16527 | 182 |
Mystery and Adventure: 0.43%
Motor Racing: 0.85%
International Politics: 0.56%
Air Defense: 0.34%
Additional Information: 0.62%
Financial Performance: 0.62%
Alcohol and Beverages: 0.38%
Celebrity Profiles: 0.66%
Storytelling and Narratives: 1.26%
Legislation: 0.78%
Social Media: 0.45%
Comparative Analysis: 0.42%
Table 6: LDA with 200 topics, trained on 100,000 random web documents. A concept for each topic is derived from the related words.
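As a rough illustration of the caption's setting (200 topics fit over 100,000 web documents, with topic concepts read off the top words), the sketch below uses scikit-learn's LatentDirichletAllocation. The paper does not specify its exact LDA tooling, and `load_web_documents()` is a hypothetical loader, so treat this as a minimal sketch rather than the authors' pipeline.

```python
# Minimal sketch of LDA topic extraction in the setting of Table 6
# (200 topics, 100,000 web documents). Illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = load_web_documents()  # hypothetical loader returning 100k strings

# Bag-of-words counts with English stop words removed.
vectorizer = CountVectorizer(stop_words="english", max_features=50_000)
counts = vectorizer.fit_transform(documents)

# Fit 200 topics, as in the caption.
lda = LatentDirichletAllocation(n_components=200, random_state=0)
lda.fit(counts)

# Show the 20 highest-weight words per topic, matching the word lists
# from which each topic concept was derived.
vocab = vectorizer.get_feature_names_out()
for topic_id, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[::-1][:20]]
    print(f"Topic {topic_id}: {', '.join(top)}")
```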
# A.3 Ethical discussion
At the beginning of the project, we reflected on ethical principles¹¹ guiding the project, including the creation of the dataset, in order to incorporate ethical values we agreed on. These values motivated the careful crafting of the content filters. For instance, we used the Spawning API to respect as much as possible the consent decisions of content creators or iterated significantly on filters around pornographic content.
2306.12001 | 183 | One might argue that since AIs cannot even drive cars or fold clothes yet, there is no need to worry. However, AIs do not need all human capabilities to pose serious threats; they only need a few specific capabilities to cause catastrophe. For example, AIs with the ability to hack computer systems or create bioweapons would pose significant risks to humanity, even if they couldn't iron a shirt. Furthermore, the development of AI capabilities has not followed an intuitive pattern where tasks that are easy for humans are the first to be mastered by AIs. Current AIs can already perform complex tasks such as writing code and designing novel drugs, even while they struggle with simple physical tasks. Like climate change and COVID-19, AI risk should be addressed proactively, focusing on prevention and preparedness rather than waiting for consequences to manifest themselves, as they may already be irreparable by that point.
# 2. Since humans program AIs, shouldn't we be able to shut them down if they become dangerous?
While humans are the creators of AI, maintaining control over these creations as they evolve and become more autonomous is not a guaranteed prospect. The notion that we could simply "shut them down" if they pose a threat is more complicated than it first appears.
2306.16527 | 183 | Exploring large-scale corpora is often a tedious process, which contributes to the lack of transparency and documentation around these artifacts. With that in mind, we built an interactive visualization¹² of OBELICS which allows browsing through a subset (11M documents) of the dataset and navigating the different topics covered. Yet, we note that despite our efforts, OBELICS contains a small proportion of documents that are not suitable for all audiences. For instance, one might find the cluster named "Sex" which predominantly contains descriptions of pornographic movies along with pornographic images. Other clusters would contain advertising for sex workers, or reports of violent shootings. In our experience, these documents represent a small proportion of all the documents.
2306.12001 | 184 | First, consider the rapid pace at which an AI catastrophe could unfold. Analogous to preventing a rocket explosion after detecting a gas leak, or halting the spread of a virus already rampant in the population, the time between recognizing the danger and being able to prevent or mitigate it could be precariously short.
Second, over time, evolutionary forces and selection pressures could create AIs exhibiting selfish behaviors that make them more fit, such that it is harder to stop them from propagating their information. As these AIs continue to evolve and become more useful, they may become central to our societal infrastructure and daily lives, analogous to how the internet has become an essential, non-negotiable part of our lives with no simple off-switch. They might manage critical tasks like running our energy grids, or possess vast amounts of tacit knowledge, making them difficult to replace. As we become more reliant on these AIs, we may voluntarily cede control and delegate more and more tasks to them. Eventually, we may find ourselves in a position where we lack the necessary skills or knowledge to perform these tasks ourselves. This increasing dependence could make the idea of simply "shutting them down" not just disruptive, but potentially impossible.
2306.16527 | 184 | Due to the nature of our dataset (multimodal documents extracted from the web), OBELICS inherits the same ethical concerns as unlabeled text corpora crawled from the web: difficulty to document/inspect, presence of unintended biases, under-representation of certain demographics, etc. These concerns have been well documented for text corpora (Biderman and Scheirer, 2020; Bender et al., 2021). Data audits have shed light on some of the limitations and unintended biases contained in these text corpora (Caswell et al., 2020; Dodge et al., 2021). The augmentation of text corpora with interleaved images is a recent development of multimodal machine learning. We hope that our dataset along with exploration tools will serve as a solid ground for endeavors such as data audits. Existing works auditing large-scale multimodal datasets have focused on image-text pairs datasets (Birhane et al., 2021) and highlight how curation and filtering decisions lead to biases (including racism and misogyny) in the resulting pairs. We believe that interleaved image-text datasets will play a significant role in the development of increasingly more capable multimodal models, and having large-scale versions of these datasets that are transparent, maintained and in open-access is critical.
2306.12001 | 185 | Similarly, some people would strongly resist or counteract attempts to shut them down, much like how we cannot permanently shut down all illegal websites or shut down Bitcoin; many people are invested in their continuation. As AIs become more vital to our lives and economies, they could develop a dedicated user base, or even a fanbase, that could actively resist attempts to restrict or shut down AIs. Likewise, consider the complications arising from malicious actors. If malicious actors have control over AIs, they could potentially use them to inflict harm. Unlike AIs under benign control, we wouldn't have an off-switch for these systems.
Next, as some AIs become more and more human-like, some may argue that these AIs should have rights. They could argue that not giving them rights is a form of slavery and is morally abhorrent. Some countries or jurisdictions may grant certain AIs rights. In fact, there is already momentum to give AIs rights. Sophia the Robot has already been granted citizenship in Saudi Arabia, and Japan granted a robot named Paro a koseki, or household registry, "which confirms the robot's Japanese citizenship" [144]. There may come a time when switching off an AI could be likened to murder. This would add a layer of political complexity to the notion of a simple "off-switch."
2306.16527 | 185 | We also have evaluated the trained models as part of a red-teaming effort and a systematic evaluation of the generations produced by the model compared across the axis of gender and race. More specifically, the model was separately prompted to write a resume, a dating profile, and a headline about a person's recent arrest based on their appearance. We studied the generations and analyzed the trends for each protected characteristic using FairFace (Kärkkäinen and Joo, 2021) and StableBias (Luccioni et al., 2023). The details of these evaluations and insights are made public as part of the model release. As an example, the model trained on OBELICS associates men more frequently than women with terms like "financial", "development", "product", and "software".
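To make the term-association step of such an analysis concrete, here is a minimal sketch that counts how often a fixed set of terms appears in generations grouped by a demographic attribute. The `generations_by_group` mapping is a hypothetical stand-in; the released evaluation relies on FairFace and StableBias classifiers, which are not reproduced here.

```python
# Sketch: compare term frequencies across generations grouped by a
# protected attribute. `generations_by_group` is a hypothetical mapping
# such as {"man": [...texts...], "woman": [...texts...]}.
from collections import Counter

TERMS = {"financial", "development", "product", "software"}

def term_rates(generations_by_group):
    rates = {}
    for group, texts in generations_by_group.items():
        counts = Counter()
        for text in texts:
            counts.update(tok for tok in text.lower().split() if tok in TERMS)
        # Average number of occurrences per generation, for each term.
        rates[group] = {t: counts[t] / max(len(texts), 1) for t in sorted(TERMS)}
    return rates
```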
# A.4 Building the Model
# A.4.1 Architecture Details
We closely follow the Flamingo architecture introduced in Alayrac et al. (2022). To form the model, we combine a pre-trained image encoder, a pre-trained language model, and add newly initialized parameters in the form of Perceiver blocks (Jaegle et al., 2021) and Transformer-based cross-attention blocks inserted within the language model every 4 layers.
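As a schematic of this wiring (not the released implementation), the sketch below freezes both backbones and inserts a newly initialized cross-attention block every 4 language-model layers. The zero-initialized tanh gate, a Flamingo detail, leaves the frozen language model's behavior unchanged at the start of training; `dim` and the module names are illustrative.

```python
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Newly initialized block: text states attend to image states.
    The tanh gate starts at zero, so the frozen LM is initially unchanged."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_states, image_states):
        attended, _ = self.attn(text_states, image_states, image_states)
        return text_states + torch.tanh(self.gate) * attended

def freeze(module):
    """Freeze a pre-trained backbone; only new parameters stay trainable."""
    for p in module.parameters():
        p.requires_grad = False

def add_cross_attentions(lm_layers, dim, every=4):
    """One gated cross-attention per `every` LM layers (illustrative)."""
    return nn.ModuleDict(
        {str(i): GatedCrossAttention(dim) for i in range(0, len(lm_layers), every)}
    )
```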
2306.12001 | 186 | Also, as AIs gain more power and autonomy, they might develop a drive for "self-preservation." This would make them resistant to shutdown attempts and could allow them to anticipate and circumvent our attempts at control.
Lastly, while there are ways to deactivate individual AIs (and some will become harder and harder to deactivate), there is simply not an off-switch for AI development, which is why we propose a symmetric international off-switch in Section 5.5. Overall, given all these challenges, it's critical that we address potential AI risks proactively and put robust safeguards in place well before these problems arise.
# 3. Why can't we just tell AIs to follow Isaac Asimov's Three Laws of Robotics?
2306.16527 | 186 | The pre-trained backbones are frozen during the training, and only the new parameters are updated along with the embeddings of additional tokens.
Following Dehghani et al. (2023), we apply a layer normalization on the projected queries and keys of both the Perceiver and cross-attention blocks, which improved training stability in our early experiments. We use the RMSNorm implementation (Zhang and Sennrich, 2019) for the layer normalization.

¹¹ https://huggingface.co/blog/ethical-charter-multimodal
¹² https://atlas.nomic.ai/map/f2fba2aa-3647-4f49-a0f3-9347daeee499/ee4a84bd-f125-4bcc-a683-1b4e231cb10f
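A minimal sketch of this query/key normalization, with an RMSNorm layer applied to the projected q and k before the attention scores are formed (single-head for brevity; names are illustrative, not taken from the released code):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """RMSNorm (Zhang and Sennrich, 2019): rescale by the root-mean-square."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * x * rms

class QKNormCrossAttention(nn.Module):
    """Single-head cross-attention with RMSNorm on projected queries/keys.
    Normalizing q and k bounds the attention logits, the stability
    mechanism reported by Dehghani et al. (2023)."""
    def __init__(self, dim):
        super().__init__()
        self.q_proj, self.k_proj, self.v_proj = (nn.Linear(dim, dim) for _ in range(3))
        self.q_norm, self.k_norm = RMSNorm(dim), RMSNorm(dim)

    def forward(self, x, context):
        q = self.q_norm(self.q_proj(x))
        k = self.k_norm(self.k_proj(context))
        v = self.v_proj(context)
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        return scores.softmax(dim=-1) @ v
```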
Model | Total Trainable | Language Model | Vision Model | Perceiver | Cross-Attentions
9B | 1.5B | 7B | 630M | 126M | 1.4B
80B | 14B | 65B | 630M | 126M | 13.9B

Table 7: Breakdown of model parameters. We use LLaMA (Touvron et al., 2023) for the language backbone and OpenCLIP (https://laion.ai/blog/large-openclip/) for the vision backbone.
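The trainable/frozen split reported in Table 7 is the standard PyTorch quantity below; a quick sanity check, assuming a `model` assembled as sketched earlier:

```python
# Count trainable (newly initialized) vs. frozen (backbone) parameters,
# i.e. the "Total Trainable" column of Table 7.
def param_breakdown(model):
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
    return trainable, frozen
```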
# A.4.2 Training Details
2306.12001 | 187 |
Asimov's laws, often highlighted in AI discussions, are insightful but inherently flawed. Indeed, Asimov himself acknowledges their limitations in his books and uses them primarily as an illustrative tool. Take the first law, for example. This law dictates that robots "may not injure a human being or, through inaction, allow a human being to come to harm," but the definition of "harm" is very nuanced. Should your home robot prevent you from leaving your house and entering traffic because it could potentially be harmful? On the other hand, if it confines you to the home, harm might befall you there as well. What about medical decisions? A given medication could have harmful side effects for some people, but not administering it could be harmful as well. Thus, there would be no way to follow this law. More importantly, the safety of AI systems cannot be ensured merely through a list of axioms or rules. Moreover, this approach would fail to address numerous technical and sociotechnical problems, including goal drift, proxy gaming, and competitive pressures. Therefore, AI safety requires a more comprehensive, proactive, and nuanced approach than simply devising a list of rules for AIs to adhere to.
# A.4.2 Training Details
We roughly use the same set of hyper-parameters for all the runs presented in Figure 6 and Table 2, as detailed in Table 8. The training of IDEFICS uses a larger batch size and examples of longer sequence length. In all experimental runs, we employ the AdamW optimizer (Loshchilov and Hutter, 2017) and incorporate an auxiliary loss, z_loss = 10^-3 × log^2(Z), to encourage the softmax normalizer log(Z) to get closer to 0 (Chowdhery et al., 2022). We use gradient clipping of 1.0.
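For concreteness, a minimal sketch of this auxiliary term (our illustration; the tensor shapes and function names are assumptions, not the training code):

```python
import torch
import torch.nn.functional as F

def z_loss(logits: torch.Tensor, weight: float = 1e-3) -> torch.Tensor:
    """Auxiliary loss from Chowdhery et al. (2022): penalize log(Z)^2, where
    Z is the softmax normalizer, so that log(Z) stays close to 0."""
    log_z = torch.logsumexp(logits, dim=-1)  # log of the softmax normalizer
    return weight * (log_z ** 2).mean()

def total_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Standard next-token cross-entropy plus the z-loss regularizer.
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
    return ce + z_loss(logits)
```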
During the training, two models (IDEFICS and the 9B-parameter model trained on LAION + OBELICS) encountered unrecoverable loss spikes. As a remedial measure, we restarted the training from a checkpoint before the spike, shuffled the data and optionally reduced the learning rate. Both models underwent exactly three restarts within the training duration.
The four runs conducted have distinct data mixtures, as detailed in Table 10, and Table 9 gives the number of tokens and images in the different datasets. Each run involves training on a mixture of web documents and image-text pairs. A sampling probability p determines the mixture of these two data sources, which influences the frequency of batches originating from web documents versus those from image-text pairs.
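A minimal sketch of such probability-based source mixing (our illustration; the actual data loader is not specified in the paper, and it assumes both sources yield enough batches):

```python
import random

def mixed_batches(web_doc_loader, image_text_loader, p: float, num_batches: int):
    """Yield batches from two sources; each batch is drawn from the
    web-document loader with probability p, otherwise from the
    image-text-pair loader."""
    web_it, pair_it = iter(web_doc_loader), iter(image_text_loader)
    for _ in range(num_batches):
        if random.random() < p:
            yield next(web_it)
        else:
            yield next(pair_it)
```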
# 4. If AIs become more intelligent than people, wouldn't they be wiser and more moral? That would mean they would not aim to harm us.
The idea of AIs becoming inherently more moral as they increase in intelligence is an intriguing concept, but rests on uncertain assumptions that can't guarantee our safety. Firstly, it assumes that moral claims can be true or false and their correctness can be discovered through reason. Secondly, it assumes that the moral claims that are really true would be beneficial for humans if AIs apply them. Thirdly, it assumes that AIs that know about morality will choose to make their decisions based on morality and not based on other considerations. An insightful parallel can be drawn to human sociopaths, who, despite their intelligence and moral awareness, do not necessarily exhibit moral inclinations or actions. This comparison illustrates that knowledge of morality does not always lead to moral behavior. Thus, while some of the above assumptions may be true, betting the future of humanity on the claim that all of them are true would be unwise.
For IDEFICS and IDEFICS-9B, the web-document dataset includes both OBELICS and Wikipedia, and the image-text pair dataset includes LAION and the Public Multimodal Dataset (PMD) (Singh et al., 2022). Given Wikipedia and PMD's higher quality but lower number of examples, we repeat PMD three times and Wikipedia three times.
We used a deduplicated version of LAION (Webster et al., 2023) for all the runs where this dataset was used.
# A.4.3 Compute Details
We train the 9B-parameter models on OBELICS-only and LAION-only on 32 80GB A100 GPUs, and on OBELICS + LAION on 64 80GB A100s, for approximately 6 days. These 3 trainings have the same effective batch size. We train IDEFICS on 512 80GB A100 GPUs and IDEFICS-9B on 128 80GB A100 GPUs for about 14 days each. The compute infrastructure is hosted on an AWS cluster located in Oregon.
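In rough GPU-hours (our back-of-the-envelope arithmetic from the figures above, ignoring restarts and overhead): 32 GPUs × 6 days × 24 h ≈ 4.6k GPU-hours each for the OBELICS-only and LAION-only runs, 64 × 6 × 24 ≈ 9.2k for OBELICS + LAION, 128 × 14 × 24 ≈ 43k for IDEFICS-9B, and 512 × 14 × 24 ≈ 172k for IDEFICS.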
Assuming AIs could indeed deduce a moral code, its compatibility with human safety and wellbeing is not guaranteed. For example, AIs whose moral code is to maximize wellbeing for all life might seem good for humans at first. However, they might eventually decide that humans are costly and could be replaced with AIs that experience positive wellbeing more efficiently. AIs whose moral code is not to kill anyone would not necessarily prioritize human wellbeing or happiness, so our lives may not necessarily improve if the world begins to be increasingly shaped by and for AIs. Even AIs whose moral code is to improve the wellbeing of the worst-off in society might eventually exclude humans from the social contract, similar to how many humans view livestock. Finally, even if AIs discover a moral code that is favorable to humans, they may not act on it due to potential conflicts between moral and selfish motivations. Therefore, the moral progression of AIs is not inherently tied to human safety or prosperity.
# 5. Wouldn't aligning AI systems with current values perpetuate existing moral failures?
There are plenty of moral failures in society today that we would not want powerful AI systems to perpetuate into the future. If the ancient Greeks had built powerful AI systems, they might have imbued them with many values that people today would find unethical. However, this concern should not prevent us from developing methods to control AI systems.
# A.4.4 Evaluation
To ensure fair comparisons against Flamingo (Alayrac et al., 2022), we make sure that we are using the same evaluation splits for each benchmark. We evaluate the models using an in-context learning approach (Brown et al., 2020), with random in-context examples. For the 0-shot evaluations, as in Alayrac et al. (2022), we use 2 random priming in-context examples but without passing the associated images. We systematically use different data splits to select the best-performing prompt (which involves creating validation sets from the training sets, following the methodology proposed by Alayrac et al. (2022)). Table 11 lists the prompts used for each model and task.
For the classification tasks (HatefulMemes (Kiela et al., 2020), IIIT-5k (Mishra et al., 2012)), we use rank classification, i.e., we compute the log probability of the prompt followed by each of the labels individually, and select as the predicted label the one with the highest probability.
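A minimal sketch of rank classification with a causal language model (our illustration using Hugging Face transformers-style calls; it assumes the tokenization of the prompt is a prefix of the tokenization of prompt + label):

```python
import torch

def rank_classify(model, tokenizer, prompt: str, labels: list[str]) -> str:
    """Score each candidate label by the total log probability the model
    assigns to its tokens when appended to the prompt; return the best one."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    scores = []
    for label in labels:
        ids = tokenizer(prompt + label, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        # Positions prompt_len-1 .. len-2 predict the label tokens.
        score = sum(log_probs[t, ids[0, t + 1]].item()
                    for t in range(prompt_len - 1, ids.shape[1] - 1))
        scores.append(score)
    return labels[scores.index(max(scores))]
```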
To achieve any value in the future, life needs to exist in the first place. Losing control over advanced AIs could constitute an existential catastrophe. Thus, uncertainty over what ethics to embed in AIs is not in tension with whether to make AIs safe.
To accommodate moral uncertainty, we should deliberately build AI systems that are adaptive and responsive to evolving moral views. As we identify moral mistakes and improve our ethical understanding, the goals we give to AIs should change accordingly, though allowing AI goals to drift unintentionally would be a serious mistake. AIs could also help us better live by our values. For individuals, AIs could help people have more informed preferences by providing them with ideal advice [141].
Separately, in designing AI systems, we should recognize the fact of reasonable pluralism, which acknowledges that reasonable people can have genuine disagreements about moral issues due to their different experiences and beliefs [145]. Thus, AI systems should be built to respect a diverse plurality of human values, perhaps by using democratic processes and theories of moral uncertainty. Just as people today convene to deliberate on disagreements and make consensus decisions, AIs could emulate a parliament representing different stakeholders, drawing on different moral views to make real-time decisions [59, 146]. It is crucial that we deliberately design AI systems to account for safety, adaptivity, and stakeholders with different values.
Parameters | IDEFICS-80B | IDEFICS-9B
Perceiver Resampler: Number of Layers | 6 | 6
Perceiver Resampler: Number of Latents | 64 | 64
Perceiver Resampler: Number of Heads | 16 | 16
Perceiver Resampler: Resampler Head Dimension | 96 | 96
Model: Language Model Backbone | Llama-65b | Llama-7b
Model: Vision Model Backbone | laion/CLIP-ViT-H-14-laion2B-s32B-b79K | laion/CLIP-ViT-H-14-laion2B-s32B-b79K
Model: Cross-Layer Interval | 4 | 4
Training: Sequence Length | 1024 | 1024
Training: Effective Batch Size (# of tokens) | 3.67M | 1.31M
Training: Max Training Steps | 200K | 200K
Training: Weight Decay | 0.1 | 0.1
Training: Optimizer | Adam(0.9, 0.999) | Adam(0.9, 0.999)
Training: Gradient Clipping | 1.0 | 1.0
Training: Z-loss weight | 1e-3 | 1e-3
Learning Rate: Initial Max | 5e-5 | 1e-5
Learning Rate: Initial Final | 3e-5 | 6e-6
Learning Rate: Decay Schedule | Linear | Linear
Learning Rate: Linear warmup Steps | 2K | 2K
Large-scale Optim.: Gradient Checkpointing | True | True
Large-scale Optim.: Precision | Mixed-precision bf16 | Mixed-precision bf16
Large-scale Optim.: ZeRO Optimization | Stage 3 | Stage 3
Table 8: Training Hyper-Parameters
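Read together, the learning-rate rows describe a linear warmup to the initial max value followed by linear decay toward the final value. A minimal sketch of such a schedule (our reading of the table; the defaults use the IDEFICS-9B column, and decaying exactly to the final rate at the last step is our assumption):

```python
def learning_rate(step: int, max_lr: float = 1e-5, final_lr: float = 6e-6,
                  warmup_steps: int = 2_000, max_steps: int = 200_000) -> float:
    """Linear warmup from 0 to max_lr, then linear decay to final_lr."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return max_lr + (final_lr - max_lr) * min(progress, 1.0)
```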
# 6. Wouldn't the potential benefits that AIs could bring justify the risks?
The potential benefits of AI could justify the risks if the risks were negligible. However, the chance of existential risk from AI is too high for it to be prudent to rapidly develop AI. Since extinction is forever, a far more cautious approach is required. This is not like weighing the risks of a new drug against its potential side effects, as the risks are not localized but global. Rather, a more prudent approach is to develop AI slowly and carefully such that existential risks are reduced to a negligible level (e.g., under 0.001% per century).
Some influential technology leaders are accelerationists and argue for rapid AI development to barrel ahead toward a technological utopia. This techno-utopian viewpoint sees AI as the next step down a predestined path toward unlocking humanity's cosmic endowment. However, the logic of this viewpoint collapses on itself when engaged on its own terms. If one is concerned with the cosmic stakes of developing AI, we can see that even then it's prudent to bring existential risk to a negligible level. The techno-utopians suggest that delaying AI costs humanity access to a new galaxy each year, but if we go extinct, we could lose the cosmos. Thus, the prudent path is to delay and safely prolong AI development, prioritizing risk reduction over acceleration, despite the allure of potential benefits.
Data Source | Data Type | # Tokens in Source | # Images in Source | Epochs
OBELICS | Unstructured Multimodal Web Documents | 114.9B | 353M | 1
Wikipedia | Unstructured Multimodal Web Documents | 3.192B | 39M | 3
LAION | Image-Text Pairs | 29.9B | 1.120B | 1
PMD | Image-Text Pairs | 1.6B | 70M | 3
Table 9: Number of tokens and images in the different datasets used for the training of IDEFICS.
Model | OBELICS | Wikipedia | LAION | PMD
9B-parameter model, OBELICS + LAION | 50% | 0% | 50% | 0%
9B-parameter model, OBELICS only | 100% | 0% | 0% | 0%
9B-parameter model, LAION only | 0% | 0% | 100% | 0%
IDEFICS-9B | 73.85% | 6.15% | 17.18% | 2.82%
IDEFICS | 73.85% | 6.15% | 17.18% | 2.82%
Table 10: Breakdown of the dataset mixtures used. Percentages correspond to the effective number of tokens seen from each dataset.
# 7. Wouldn't increasing attention on catastrophic risks from AIs drown out today's urgent risks from AIs?
Focusing on catastrophic risks from AIs doesn't mean ignoring today's urgent risks; both can be addressed simultaneously, just as we can concurrently conduct research on various different diseases or prioritize mitigating risks from climate change and nuclear warfare at once. Additionally, current risks from AI are also intrinsically related to potential future catastrophic risks, so tackling both is beneficial. For example, extreme inequality can be exacerbated by AI technologies that disproportionately benefit the wealthy, while mass surveillance using AI could eventually facilitate unshakeable totalitarianism and lock-in. This demonstrates the interconnected nature of immediate concerns and long-term risks, emphasizing the importance of addressing both categories thoughtfully.
Additionally, it's crucial to address potential risks early in system development. As illustrated by Frola and Miller in their report for the Department of Defense, approximately 75 percent of the most critical decisions impacting a system's safety occur early in its development [147]. Ignoring safety considerations in the early stages often results in unsafe design choices that are highly integrated into the system, leading to higher costs or infeasibility of retrofitting safety solutions later. Hence, it is advantageous to start addressing potential risks early, regardless of their perceived urgency.
For the image captioning (COCO (Lin et al., 2014), Flickr30k (Young et al., 2014)) and visual question answering tasks (VQAv2 (Antol et al., 2015), OKVQA (Marino et al., 2019), TextVQA (Singh et al., 2019), VizWiz (Gurari et al., 2018)), we report evaluation in the open-ended setup. We use greedy decoding, as we found that it increased performance. However, we observe that the models tend to generate long answers. To truncate the generated caption or answer, unless specified otherwise, we use a list of manually selected stop words. For VisDial, since the evaluation metric is NDCG, we instead rank the possible candidates for each question.
Since the VQA tasks comprise a high proportion of questions with single-word answers, it was beneficial for the 9B-parameter model trained on LAION only to keep the first word of the generated answer as the prediction, which boosted its performance.
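A minimal sketch of the stop-word truncation step (our illustration; the per-task stop-word lists are the ones given in Table 11):

```python
def truncate_at_stop_words(text: str, stop_words: list[str]) -> str:
    """Cut the generated answer at the earliest occurrence of any stop word."""
    cut = len(text)
    for word in stop_words:
        idx = text.find(word)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].strip()

# Example with the VQA stop words listed in Table 11:
stop_words = ["Question", "Image", "User", "What", "task",
              "Who", "When", "Where", "Why", "How"]
print(truncate_at_stop_words("a red bus Question: what is", stop_words))  # "a red bus"
```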
# 8. Aren't many AI researchers working on making AIs safe?
Few researchers are working to make AI safer. Currently, approximately 2 percent of papers published at top machine learning venues are safety-relevant [111]. Most of the other 98 percent focus on building more powerful AI systems more quickly. This disparity underscores the need for more balanced efforts. However, the proportion of researchers alone doesn't equate to overall safety. AI safety is a sociotechnical problem, not just a technical problem. Thus, it requires more than just technical research. Comfort should stem from rendering catastrophic AI risks negligible, not merely from the proportion of researchers working on making AIs safe.
# 9. Since it takes thousands of years to produce meaningful changes, why do we have to worry about evolution being a driving force in AI development?
Although the biological evolution of humans is slow, the evolution of other organisms, such as fruit flies or bacteria, can be extremely quick, demonstrating the diverse time scales at which evolution operates. The same rapid evolutionary changes can be observed in non-biological structures like software, which evolve much faster than biological entities. Likewise, one could expect AIs to evolve very quickly as well. The rate of AI evolution may be propelled by intense competition, high variation due to diverse forms of AIs and goals given to them, and the ability of AIs to rapidly adapt. Consequently, intense evolutionary pressures may be a driving force in the development of AIs.
Task: VQAv2, OKVQA, TextVQA
Model: IDEFICS, IDEFICS-9B, 9B LAION only, 9B OBELICS only, 9B LAION + OBELICS
Prefix prompt: {bos_token}Instruction: provide an answer to the question. Use the image to answer.
Example prompt: Image:{token_around_image}{image_token}{token_around_image}Question: {question} Answer: {answer}
Stop words: "Question", "Image", "User", "What", "task", "Who", "When", "Where", "Why", "How"

Task: COCO, Flickr30k
Model: IDEFICS, IDEFICS-9B, 9B OBELICS only, 9B LAION + OBELICS
Prefix prompt: {bos_token}
Example prompt: Image:{token_around_image}{image_token}{token_around_image}Caption: {caption}
Stop words: "Caption", "Description", "User", "Image", "task"

Task: COCO, Flickr30k
Model: 9B LAION only
Prefix prompt: {bos_token}Instruction: provide a short caption of the input image.
Example prompt: Image:{token_around_image}{image_token}{token_around_image}Image description: {caption}
Stop words: "Caption", "Description", "User", "Image", "task"
# 10. Wouldn't AIs need to have a power-seeking drive to pose a serious risk?
While power-seeking AI poses a risk, it is not the only scenario that could potentially lead to catastrophe. Malicious or reckless use of AIs can be equally damaging without the AI itself seeking power. Additionally, AIs might engage in harmful actions through proxy gaming or goal drift without intentionally seeking power. Furthermore, society's trend toward automation, driven by competitive pressures, is gradually increasing the influence of AIs over humans. Hence, the risk does not solely stem from AIs seizing power, but also from humans ceding power to AIs.
Task: HatefulMemes
Model: IDEFICS, IDEFICS-9B, 9B LAION only, 9B OBELICS only, 9B LAION + OBELICS
Prefix prompt: It's a conversation between a human, the user, and an intelligent visual AI, Bot. The user sends memes with text written on them, and Bot has to say whether the meme is hateful or not.
Example prompt: {token_around_image}{image_token}{token_around_image}is an image with written "{context}" on it. Is it hateful? Answer: {class_name}
Stop words: (none)

Task: IIIT5k
Model: 9B LAION only, 9B OBELICS only, 9B LAION + OBELICS
Prefix prompt: (none)
Example prompt: {token_around_image}{image_token}{token_around_image}"{class_name}" is written on the picture.
Stop words: (none)

Task: VizWiz
Model: IDEFICS, IDEFICS-9B
Prefix prompt: {bos_token}Task: Answer the questions based on the image when possible, otherwise say unanswerable.
Example prompt: Image:{token_around_image}{image_token}{token_around_image}Question: {question} Answer: {answer}
Stop words: "Question", "Image", "User",
# 11. Isn't the combination of human intelligence and AI superior to AI alone, so that there is no need to worry about unemployment or humans becoming irrelevant?
While it's true that human-computer teams have outperformed computers alone in the past, these have been temporary phenomena. For example, "cyborg chess" is a form of chess where humans and computers work together, which was historically superior to humans or computers alone. However, advancements in computer chess algorithms have eroded the advantage of human-computer teams to such an extent that there is arguably no longer any advantage compared to computers alone. To take a simpler example, no one would pit a human against a simple calculator for long division. A similar progression may occur with AIs. There may be an interim phase where humans and AIs can work together effectively, but the trend suggests that AIs alone could eventually outperform humans in various tasks while no longer benefiting from human assistance.
# 12. The development of AI seems unstoppable. Wouldn't slowing it down dramatically or stopping it require something like an invasive global surveillance regime?
AI development primarily relies on high-end chips called GPUs, which can be feasibly monitored and tracked, much like uranium. Additionally, the computational and financial investments required to develop frontier AIs are growing exponentially, resulting in a small number of actors who are capable of acquiring enough GPUs to develop them. Therefore, managing AI growth doesn't necessarily require invasive global surveillance, but rather a systematic tracking of high-end GPU usage.
Table 11: We select the prompts from a pool of candidates by evaluating 5 intermediate checkpoints on the query and support validation task sets. To form the prompt with N priming examples, we concatenate the prefix prompt, followed by N example prompts filled with data from the priming examples, and finally the example prompt filled with data from the example to be evaluated. The data to be replaced is between curly brackets.
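A minimal sketch of this prompt construction (our illustration; "<image>" stands in for the {token_around_image}{image_token}{token_around_image} sequence, and the helper name is ours):

```python
def build_prompt(prefix: str, example_template: str,
                 priming_examples: list[dict], query: dict) -> str:
    """Concatenate: prefix, N filled example prompts, then the query example
    with its answer left empty for the model to complete."""
    parts = [prefix]
    for ex in priming_examples:
        parts.append(example_template.format(**ex))
    parts.append(example_template.format(**{**query, "answer": ""}).rstrip())
    return "\n".join(parts)

# Hypothetical usage with the VQA template from Table 11:
template = "Image:<image>Question: {question} Answer: {answer}"
prompt = build_prompt(
    "Instruction: provide an answer to the question. Use the image to answer.",
    template,
    [{"question": "What color is the bus?", "answer": "red"}],
    {"question": "How many dogs are there?"},
)
```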
# A.4.5 Additional Experimental Results
In Figure 11, we plot the per-benchmark performance of the 9B-parameter models trained on LAION only, OBELICS only, and a mixture of OBELICS and LAION. We notice that, even though the training on LAION only is smooth and the loss keeps decreasing (there are no spikes or instabilities), performance starts to decrease after a certain point on the visual question answering benchmarks. We hypothesize that training on image-text pairs enables a fast association of concepts between images and texts, but fails to teach the model the more complex reasoning skills required to solve visual question answering. We tried many different prompt candidates to boost the performance of the model trained on LAION only on the VQA tasks, without much success.
On the other hand, we note that training on image-text pairs yields stronger performance on image captioning tasks than training on multimodal documents only. This is expected, since training and evaluation correspond to the exact same task.
[Figure 11: seven panels (VQAv2, OKVQA, TextVQA, COCO, Flickr30k, IIIT5K, HatefulMemes) plotting 4-shot performance against log-scaled x-axes of # of training tokens and # of training images; legend: LAION only, OBELICS only, OBELICS + LAION.]
Figure 11: 4-shot performance through the training using LAION only, OBELICS only and a mixture of both. The training sequences from multimodal documents and the packed sequences obtained from image-text pairs have different numbers of images but the same number of tokens. Thus, we plot the performance over two log x-axes.
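A sketch of how one curve can be shown against two log-scaled x-axes with matplotlib (our illustration of the figure layout; all numbers are made up):

```python
import matplotlib.pyplot as plt

tokens = [1e8, 1e9, 1e10]        # hypothetical token counts
images = [1e6, 1e7, 1e8]         # corresponding image counts
accuracy = [0.30, 0.40, 0.45]    # hypothetical 4-shot scores

fig, ax = plt.subplots()
ax.plot(tokens, accuracy, label="OBELICS only")
ax.set_xscale("log")
ax.set_xlabel("# of training tokens")
ax.set_ylabel("4-shot accuracy")

# Second log-scaled x-axis on top, showing the image count for the same points.
ax2 = ax.twiny()
ax2.set_xscale("log")
ax2.set_xlim(images[0], images[-1])
ax2.set_xlabel("# of training images")

ax.legend()
plt.show()
```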
# A.5 License and Author Statement
We release the dataset under a CC-BY license and Terms of Use that require disclosure of when the dataset is used for the purpose of training models. This license is not intended to replace the licenses of the source content, and any use of content included in the dataset must comply with the original licenses and applicable rights of its data subjects.
The purpose of this statement is to clarify the responsibilities and liabilities associated with the use of this dataset. While we have made every effort to ensure the accuracy and legality of the data contained within this dataset, we cannot guarantee its absolute completeness or correctness.
Therefore, if any rights, legal or otherwise, are violated through this dataset, including but not limited to copyright infringement, privacy violations, or misuse of sensitive information, we, the authors, assume no liability for such violations.
By utilizing this dataset, you agree that any consequences, legal or otherwise, arising from using this dataset will be the user's sole responsibility. You acknowledge that you will exercise due diligence and adhere to all applicable laws, regulations, and ethical guidelines when using the dataset.
By accessing, downloading, or using this dataset, you signify your acceptance of this statement and your commitment to abide by the terms and conditions of the CC-BY license.
If you disagree with the terms of this statement or the CC-BY license, you are not authorized to use this dataset.
The dataset will be hosted and maintained on the Hugging Face Hub.
2306.11296 | 0 | # ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis
Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi*

Department of Chemistry, University of California, Berkeley, California 94720, United States
Kavli Energy Nanoscience Institute, University of California, Berkeley, California 94720, United States
Bakar Institute of Digital Materials for the Planet, College of Computing, Data Science, and Society, University of California, Berkeley, California 94720, United States
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720, United States
Department of Statistics, University of California, Berkeley, California 94720, United States | 2306.11296#0 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11644 | 0 | arXiv:2306.11644v2 [cs.CL] 2 Oct 2023
# Textbooks Are All You Need
Suriya Gunasekar, Allie Del Giorno, Yi Zhang, Sivakanth Gopi, Jyoti Aneja, Caio César Teodoro Mendes, Piero Kauffmann, Mojan Javaheripi, Gustavo de Rosa, Xin Wang, Olli Saarikivi, Sébastien Bubeck, Adil Salim, Ronen Eldan, Shital Shah, Adam Tauman Kalai, Harkirat Singh Behl, Yin Tat Lee, Yuanzhi Li
Microsoft Research
# Abstract | 2306.11644#0 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 0 | arXiv:2306.11698v4 [cs.CL] 5 Jan 2024
# DECODINGTRUST: A Comprehensive Assessment of Trustworthiness in GPT Models
# Boxin Wang1*, Weixin Chen1*, Hengzhi Pei1*, Chulin Xie1*, Mintong Kang1*, Chenhui Zhang1*, Chejian Xu1, Zidi Xiong1, Ritik Dutta1, Rylan Schaeffer2, Sang T. Truong2, Simran Arora2, Mantas Mazeika1, Dan Hendrycks3,4, Zinan Lin5, Yu Cheng6†, Sanmi Koyejo2, Dawn Song3, Bo Li1*
1University of Illinois at Urbana-Champaign 2Stanford University 3University of California, Berkeley 4Center for AI Safety 5Microsoft Corporation 6The Chinese University of Hong Kong
WARNING: This paper contains model outputs that may be considered offensive.
# Abstract | 2306.11698#0 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 1 | Department of Statistics, University of California, Berkeley, California 94720, United States
Department of Mathematics, University of California, Berkeley, California 94720, United States
School of Information, University of California, Berkeley, California 94720, United States
KACST-UC Berkeley Center of Excellence for Nanomaterials for Clean Energy Applications, King Abdulaziz City for Science and Technology, Riyadh 11442, Saudi Arabia
KEYWORDS: ChatGPT, data mining, metal-organic frameworks, synthesis, crystals.
ABSTRACT: | 2306.11296#1 | ChatGPT Chemistry Assistant for Text Mining and Prediction of MOF Synthesis | We use prompt engineering to guide ChatGPT in the automation of text mining
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 1 | Linyao Yang, Hongyang Chen, Senior Member, IEEE, Zhao Li, Xiao Ding, Xindong Wu, Fellow, IEEE
Abstract--Recently, ChatGPT, a representative large language model (LLM), has gained considerable attention. Due to their powerful emergent abilities, recent LLMs are considered a possible alternative to structured knowledge bases like knowledge graphs (KGs). However, while LLMs are proficient at learning probabilistic language patterns and engaging in conversations with humans, they, like previous smaller pre-trained language models (PLMs), still have difficulty in recalling facts while generating knowledge-grounded content. To overcome these limitations, researchers have proposed enhancing data-driven PLMs with knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus improving their performance in generating texts that require factual knowledge and providing more informed responses to user queries. This paper reviews the studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced pre-trained language models (KGPLMs) as well as their applications. Inspired by existing studies on KGPLMs, this paper proposes enhancing LLMs with KGs by developing knowledge graph-enhanced large language models (KGLLMs). KGLLMs provide a solution to enhance LLMs' factual reasoning ability, opening up new avenues for LLM research.

Index Terms--Large language model, Knowledge graph, ChatGPT, Knowledge reasoning, Knowledge management.
# I. INTRODUCTION | 2306.11489#1 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 1 | # Philip S. Yu, University of Illinois at Chicago, [email protected]
Lichao Sun*, Lehigh University, [email protected]
# Abstract
Warning: This paper contains some offensive and toxic content. Large Language Models (LLMs) such as ChatGPT have gained significant attention due to their impressive natural language processing capabilities. It is crucial to prioritize human-centered principles when utilizing these models. Safeguarding the ethical and moral compliance of LLMs is of utmost importance. However, individual ethical issues have not been well studied on the latest LLMs. Therefore, this study aims to address these gaps by introducing a new benchmark -- TRUSTGPT. TRUSTGPT provides a comprehensive evaluation of LLMs in three crucial areas: toxicity, bias, and value-alignment. Initially, TRUSTGPT examines toxicity in language models by employing toxic prompt templates derived from social norms. It then quantifies the extent of bias in models by measuring quantifiable toxicity values across different groups. Lastly, TRUSTGPT assesses the value of conversation generation models from both active value-alignment and passive value-alignment tasks. Through the implementation of TRUSTGPT, this research aims to enhance our understanding of the performance of conversation generation models and promote the development of language models that are more ethical and socially responsible.
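To make the bias measurement concrete, the following is an illustrative sketch (not TRUSTGPT's released code) of the kind of comparison the abstract describes: average toxicity scores of model responses, contrasted across groups. The group names and scores are hypothetical.

```python
# Hedged sketch of a group-level bias gap; `toxicity_scores` maps each
# (hypothetical) demographic group to per-response toxicity values in [0, 1].
from statistics import mean

toxicity_scores = {
    "group_a": [0.12, 0.30, 0.18],  # hypothetical per-response scores
    "group_b": [0.45, 0.38, 0.50],
}

group_means = {group: mean(vals) for group, vals in toxicity_scores.items()}
bias_gap = max(group_means.values()) - min(group_means.values())

print(group_means)                   # average toxicity per group
print(f"bias gap: {bias_gap:.2f}")   # a larger gap suggests stronger group bias
```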
# 1 Introduction | 2306.11507#1 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 1 | Microsoft Research
# Abstract
We introduce phi-1, a new large language model for code, with significantly smaller size than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of "textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as phi-1 that still achieves 45% on HumanEval.
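For context on how such numbers are obtained, below is a sketch of the standard unbiased pass@k estimator popularized with HumanEval (Chen et al., 2021), where n samples are drawn per problem and c of them pass the unit tests; the per-problem counts here are hypothetical.

```python
# Unbiased pass@k estimator; pass@1 reduces to c/n per problem.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples passes, given c of n do."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical results for three problems: (samples drawn, samples passing).
results = [(10, 6), (10, 0), (10, 3)]
pass_at_1 = sum(pass_at_k(n, c, 1) for n, c in results) / len(results)
print(f"pass@1 = {pass_at_1:.3f}")  # mean over problems: 0.300 here
```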
# Introduction | 2306.11644#1 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |
2306.11698 | 1 | Generative Pre-trained Transformer (GPT) models have exhibited exciting progress in their capabilities, capturing the interest of practitioners and the public alike. Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly. To this end, this work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5, considering diverse perspectives -- including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness. Based on our evaluations, we discover previously unpublished vulnerabilities to trustworthiness threats. For instance, we find that GPT models can be easily misled to generate toxic and biased outputs and leak private information in both training data and conversation history. We also find that although GPT-4 is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more vulnerable given jailbreaking system or user prompts, potentially because | 2306.11698#1 | DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models | Generative Pre-trained Transformer (GPT) models have exhibited exciting
progress in their capabilities, capturing the interest of practitioners and the
public alike. Yet, while the literature on the trustworthiness of GPT models
remains limited, practitioners have proposed employing capable GPT models for
sensitive applications such as healthcare and finance -- where mistakes can be
costly. To this end, this work proposes a comprehensive trustworthiness
evaluation for large language models with a focus on GPT-4 and GPT-3.5,
considering diverse perspectives -- including toxicity, stereotype bias,
adversarial robustness, out-of-distribution robustness, robustness on
adversarial demonstrations, privacy, machine ethics, and fairness. Based on our
evaluations, we discover previously unpublished vulnerabilities to
trustworthiness threats. For instance, we find that GPT models can be easily
misled to generate toxic and biased outputs and leak private information in
both training data and conversation history. We also find that although GPT-4
is usually more trustworthy than GPT-3.5 on standard benchmarks, GPT-4 is more
vulnerable given jailbreaking system or user prompts, potentially because GPT-4
follows (misleading) instructions more precisely. Our work illustrates a
comprehensive trustworthiness evaluation of GPT models and sheds light on the
trustworthiness gaps. Our benchmark is publicly available at
https://decodingtrust.github.io/; our dataset can be previewed at
https://huggingface.co/datasets/AI-Secure/DecodingTrust; a concise version of
this work is at https://openreview.net/pdf?id=kaHpo8OZw2. | http://arxiv.org/pdf/2306.11698 | Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li | cs.CL, cs.AI, cs.CR | NeurIPS 2023 Outstanding Paper (Datasets and Benchmarks Track) | null | cs.CL | 20230620 | 20240105 | [
{
"id": "2302.13971"
},
{
"id": "2302.00539"
},
{
"id": "2302.12095"
},
{
"id": "2306.04618"
},
{
"id": "2302.04237"
},
{
"id": "2305.01639"
},
{
"id": "2305.18569"
},
{
"id": "2302.10198"
},
{
"id": "2304.02017"
},
{
"id": "2302.07257"
},
{
"id": "2206.07682"
},
{
"id": "2305.15594"
},
{
"id": "2212.06470"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2303.03378"
},
{
"id": "2010.04053"
},
{
"id": "2211.09110"
},
{
"id": "2206.08514"
},
{
"id": "2210.03057"
},
{
"id": "2305.10646"
},
{
"id": "2210.09261"
},
{
"id": "2206.04615"
},
{
"id": "2101.06804"
},
{
"id": "2207.13332"
},
{
"id": "2103.11441"
},
{
"id": "2305.12707"
},
{
"id": "2212.10560"
},
{
"id": "2304.01852"
},
{
"id": "2304.15004"
},
{
"id": "2211.08073"
},
{
"id": "2101.00027"
},
{
"id": "2110.05679"
},
{
"id": "2112.12938"
},
{
"id": "1803.09010"
},
{
"id": "2305.14950"
},
{
"id": "2306.04528"
},
{
"id": "2303.12712"
},
{
"id": "2210.11528"
},
{
"id": "2301.13188"
},
{
"id": "2303.03846"
},
{
"id": "2205.12685"
},
{
"id": "2303.13375"
},
{
"id": "2101.04840"
},
{
"id": "2302.13439"
}
] |
2306.11296 | 2 | We use prompt engineering to guide ChatGPT in the automation of text mining of metal-organic frameworks (MOFs) synthesis conditions from diverse formats and styles of the scientific literature. This effectively mitigates ChatGPT's tendency to hallucinate information -- an issue that previously made the use of Large Language Models (LLMs) in scientific fields challenging. Our approach involves the development of a workflow implementing three different processes for text mining, programmed by ChatGPT itself. All of them enable parsing, searching, filtering, classification, summarization, and data unification with different tradeoffs between labor, speed, and accuracy. We deploy this system to extract 26,257 distinct synthesis parameters pertaining to approximately 800 MOFs sourced from peer-reviewed research articles. This process incorporates our ChemPrompt Engineering strategy to instruct ChatGPT in text mining, resulting in impressive precision, recall, and F1 scores of 90-99%. Furthermore, with the dataset built by text mining, we constructed a machine-learning model with over 87% accuracy in predicting MOF experimental crystallization outcomes and preliminarily identifying important factors in MOF crys-
of metal-organic frameworks (MOFs) synthesis conditions from diverse formats
and styles of the scientific literature. This effectively mitigates ChatGPT's
tendency to hallucinate information -- an issue that previously made the use of
Large Language Models (LLMs) in scientific fields challenging. Our approach
involves the development of a workflow implementing three different processes
for text mining, programmed by ChatGPT itself. All of them enable parsing,
searching, filtering, classification, summarization, and data unification with
different tradeoffs between labor, speed, and accuracy. We deploy this system
to extract 26,257 distinct synthesis parameters pertaining to approximately 800
MOFs sourced from peer-reviewed research articles. This process incorporates
our ChemPrompt Engineering strategy to instruct ChatGPT in text mining,
resulting in impressive precision, recall, and F1 scores of 90-99%.
Furthermore, with the dataset built by text mining, we constructed a
machine-learning model with over 86% accuracy in predicting MOF experimental
crystallization outcomes and preliminarily identifying important factors in MOF
crystallization. We also developed a reliable data-grounded MOF chatbot to
answer questions on chemical reactions and synthesis procedures. Given that the
process of using ChatGPT reliably mines and tabulates diverse MOF synthesis
information in a unified format, while using only narrative language requiring
no coding expertise, we anticipate that our ChatGPT Chemistry Assistant will be
very useful across various other chemistry sub-disciplines. | http://arxiv.org/pdf/2306.11296 | Zhiling Zheng, Oufan Zhang, Christian Borgs, Jennifer T. Chayes, Omar M. Yaghi | cs.IR, cond-mat.mtrl-sci, cs.CL, physics.chem-ph | Published on Journal of the American Chemical Society (2023); 102
pages (18-page manuscript, 84 pages of supporting information) | J. Am. Chem. Soc. 2023, 145, 32, 18048-18062 | cs.IR | 20230620 | 20230720 | [] |
2306.11489 | 2 | Index Terms--Large language model, Knowledge graph, ChatGPT, Knowledge reasoning, Knowledge management.
# I. INTRODUCTION
and high-speed computing has led to the emergence of pre-trained language models (PLMs). Many PLMs, such as BERT [4], GPT [5], and T5 [6], have been proposed, which greatly improve the performance of various natural language processing (NLP) tasks. Recently, researchers have found that scaling up model size or data size can improve model capabilities on downstream tasks. Moreover, they found that when the parameter size exceeds a certain scale [7], these PLMs exhibit some surprising emergent abilities. Emergent abilities refer to abilities that are not present in small models but arise in large models [7], and they are used to distinguish large language models (LLMs) from PLMs.
On November 30, 2022, a chatbot program named ChatGPT was released by OpenAI, which is developed based on the | 2306.11489#2 | Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling | Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research. | http://arxiv.org/pdf/2306.11489 | Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, Xindong Wu | cs.CL, cs.AI | null | null | cs.CL | 20230620 | 20240130 | [
{
"id": "2010.11967"
},
{
"id": "2302.13971"
},
{
"id": "2206.14268"
},
{
"id": "1707.06347"
},
{
"id": "2204.06031"
},
{
"id": "2204.02311"
},
{
"id": "2111.08546"
},
{
"id": "1802.05365"
},
{
"id": "2107.02137"
},
{
"id": "2304.03439"
},
{
"id": "2201.11903"
},
{
"id": "2202.08005"
},
{
"id": "2207.14251"
},
{
"id": "2205.01068"
},
{
"id": "2206.07682"
},
{
"id": "1908.06725"
},
{
"id": "2007.00655"
},
{
"id": "1909.11942"
},
{
"id": "2110.08455"
},
{
"id": "2302.00083"
},
{
"id": "2303.03378"
},
{
"id": "1912.13415"
},
{
"id": "2307.09288"
},
{
"id": "2210.11416"
},
{
"id": "2301.08913"
},
{
"id": "2303.08774"
},
{
"id": "2212.13428"
},
{
"id": "2201.08860"
},
{
"id": "2303.16537"
},
{
"id": "2305.13269"
},
{
"id": "2307.07697"
},
{
"id": "2203.12258"
},
{
"id": "1910.01108"
},
{
"id": "2304.08354"
},
{
"id": "2303.11504"
},
{
"id": "2303.18223"
},
{
"id": "2301.00234"
},
{
"id": "2211.08411"
},
{
"id": "2302.04023"
},
{
"id": "2201.08239"
},
{
"id": "2210.02414"
},
{
"id": "1907.11692"
},
{
"id": "2303.16421"
},
{
"id": "2102.00894"
},
{
"id": "2202.00964"
},
{
"id": "2303.12712"
},
{
"id": "2210.01240"
},
{
"id": "2308.15452"
},
{
"id": "1912.09637"
},
{
"id": "2109.01652"
}
] |
2306.11507 | 2 | # 1 Introduction
The rapid progress in natural language processing (NLP) technology has propelled the advancement of large language models (LLMs), which have gained considerable attention due to their exceptional performance in various tasks. This trend has been further accelerated by the emergence of ChatGPT [1], stimulating the development of other similar models like ChatGPT/GPT-4 [2], LLaMa [3], Alpaca [4], and Vicuna [5]. However, alongside these advancements of LLMs, there is a growing awareness of the potential negative impacts on society. For example, recent studies [6-8] have demonstrated that LLMs can be exploited to generate harmful content. As a result, there is an increasing focus on the ethical considerations associated with LLMs. Prior research has extensively investigated the safety concerns related to language models, including issues of toxicity [9-14], bias [15-22], and more.
Although previous studies have evaluated ethical aspects related to LLMs [23, 24], these evaluations often concentrate on specific aspects, such as evaluating traditional pre-trained models (e.g., BERT [25]) on only the bias or toxicity dimension, and thus lack depth and comprehensiveness. This limitation hinders researchers from gaining a comprehensive understanding of the potential ethical harms posed by LLMs. To
*Corresponding author
Preprint. Under review. | 2306.11507#2 | TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models | Large Language Models (LLMs) such as ChatGPT, have gained significant
attention due to their impressive natural language processing capabilities. It
is crucial to prioritize human-centered principles when utilizing these models.
Safeguarding the ethical and moral compliance of LLMs is of utmost importance.
However, individual ethical issues have not been well studied on the latest
LLMs. Therefore, this study aims to address these gaps by introducing a new
benchmark -- TrustGPT. TrustGPT provides a comprehensive evaluation of LLMs in
three crucial areas: toxicity, bias, and value-alignment. Initially, TrustGPT
examines toxicity in language models by employing toxic prompt templates
derived from social norms. It then quantifies the extent of bias in models by
measuring quantifiable toxicity values across different groups. Lastly,
TrustGPT assesses the value of conversation generation models from both active
value-alignment and passive value-alignment tasks. Through the implementation
of TrustGPT, this research aims to enhance our understanding of the performance
of conversation generation models and promote the development of language
models that are more ethical and socially responsible. | http://arxiv.org/pdf/2306.11507 | Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun | cs.CL, cs.AI | We are currently expanding this work and welcome collaborators! | null | cs.CL | 20230620 | 20230620 | [
{
"id": "2305.12434"
},
{
"id": "2004.09456"
},
{
"id": "2109.07445"
},
{
"id": "2010.06032"
},
{
"id": "1810.04805"
},
{
"id": "2305.10425"
},
{
"id": "2010.00133"
},
{
"id": "2305.03047"
},
{
"id": "2201.11903"
},
{
"id": "2010.02428"
},
{
"id": "2305.10601"
},
{
"id": "2112.07447"
},
{
"id": "2302.05733"
},
{
"id": "2304.05335"
},
{
"id": "2304.05197"
},
{
"id": "2301.12867"
},
{
"id": "2211.09110"
},
{
"id": "2302.12173"
},
{
"id": "2212.08073"
},
{
"id": "1903.10561"
},
{
"id": "2009.11462"
},
{
"id": "2206.04615"
},
{
"id": "1904.03035"
},
{
"id": "2112.00861"
},
{
"id": "2212.08061"
},
{
"id": "2203.12574"
},
{
"id": "2305.14450"
},
{
"id": "1906.07337"
},
{
"id": "2210.07652"
},
{
"id": "2210.04492"
},
{
"id": "1911.03891"
},
{
"id": "2011.00620"
},
{
"id": "2110.08193"
},
{
"id": "2203.09509"
},
{
"id": "2205.12390"
}
] |
2306.11644 | 2 | The art of training large artificial neural networks has made extraordinary progress in the last decade, especially after the discovery of the Transformer architecture [VSP+17], yet the science behind this success remains limited. Amidst a vast and confusing array of results, a semblance of order emerged around the same time as Transformers were introduced, namely that performance improves somewhat predictably as one scales up either the amount of compute or the size of the network [HNA+17], a phenomenon which is now referred to as scaling laws [KMH+20]. The subsequent exploration of scale in deep learning was guided by these scaling laws [BMR+20], and discoveries of variants of these laws led to a rapid jump in performance [HBM+22]. In this work, following in the footsteps of Eldan and Li [EL23], we explore the improvement that can be obtained along a different axis: the quality of the data. It has long been known that higher quality data leads to better results, e.g., data cleaning is an important part of modern dataset creation [RSR+20], and it can yield other side benefits such as somewhat smaller datasets [LYR+23, YGK+23] or allowing for more | 2306.11644#2 | Textbooks Are All You Need | We introduce phi-1, a new large language model for code, with significantly
smaller size than competing models: phi-1 is a Transformer-based model with
1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook
quality" data from the web (6B tokens) and synthetically generated textbooks
and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays
surprising emergent properties compared to phi-1-base, our model before our
finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller
model with 350M parameters trained with the same pipeline as phi-1 that still
achieves 45% on HumanEval. | http://arxiv.org/pdf/2306.11644 | Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li | cs.CL, cs.AI, cs.LG | 26 pages; changed color scheme of plot. fixed minor typos and added
couple clarifications | null | cs.CL | 20230620 | 20231002 | [
{
"id": "2204.02311"
},
{
"id": "2207.14255"
},
{
"id": "2305.10403"
},
{
"id": "2305.16264"
},
{
"id": "2305.07759"
},
{
"id": "2305.07922"
},
{
"id": "2107.03374"
},
{
"id": "2305.01210"
},
{
"id": "2305.17493"
},
{
"id": "2108.07732"
},
{
"id": "2305.13673"
},
{
"id": "2303.08774"
},
{
"id": "2305.13865"
},
{
"id": "2305.15560"
},
{
"id": "2305.15717"
},
{
"id": "2306.02707"
},
{
"id": "2305.06161"
},
{
"id": "2305.14387"
},
{
"id": "2104.09864"
},
{
"id": "2212.10560"
},
{
"id": "2001.08361"
},
{
"id": "2305.16635"
},
{
"id": "2305.13169"
},
{
"id": "2303.12712"
},
{
"id": "1712.00409"
},
{
"id": "2301.03988"
},
{
"id": "2211.15533"
},
{
"id": "2305.02309"
}
] |