Dataset schema (column: type, observed min/max):
doi: string (length 10-10)
chunk-id: int64 (values 0-936)
chunk: string (length 401-2.02k)
id: string (length 12-14)
title: string (length 8-162)
summary: string (length 228-1.92k)
source: string (length 31-31)
authors: string (length 7-6.97k)
categories: string (length 5-107)
comment: string (length 4-398)
journal_ref: string (length 8-194)
primary_category: string (length 5-17)
published: string (length 8-8)
updated: string (length 8-8)
references: list
2308.01320
2
# 1 Overview

ChatGPT [1]-like models have taken the AI world by storm, and it would not be an overstatement to say that their impact on the digital world has been revolutionary. These models are incredibly versatile, capable of performing tasks like summarization, coding, and translation with results that are on par with or even exceed the capabilities of human experts. Given the sheer power of these models, multiple efforts are underway in the AI open-source community to make ChatGPT-style models more accessible (e.g., ChatLLaMa [2], Alpaca [3], Vicuna [4], Databricks-Dolly [5], etc.). Despite these incredible efforts, there is still a lack of an end-to-end RLHF pipeline capable of training powerful ChatGPT-like models that is easily accessible to the AI community. For instance, training a modest 6.7B ChatGPT model with existing systems typically requires an expensive multi-GPU setup that is beyond the reach of many data scientists. Even with access to such computing resources, training efficiency is often less than 5% of what these machines are capable of (as illustrated in Figure 6). And finally, existing solutions simply cannot support
2308.01320#2
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
2
However, assuming a single image as input is limiting: autoregressive vision-language models enable new capabilities by instead mapping an arbitrarily interleaved sequence of images and text to textual outputs. (Affiliations: 1 University of Washington, 2 Stanford University, 3 Allen Institute for AI, 4 LAION, 5 University of California Santa Barbara, 6 Hebrew University, 7 Columbia University, 8 Google DeepMind, 9 Juelich Supercomputing Center, Research Center Juelich. Correspondence to <[email protected], [email protected], [email protected]>.) This interface provides important flexibility: the input sequence can include demonstrations for a new task, enabling few-shot, in-context learning [3] or multi-round multimodal chatbot interactions. Evaluations suggest that autoregressive vision-language models can be performant foundation models [5]: models like Flamingo [3], CM3 [1], Kosmos-1 [12], PALM-E [8], and multimodal GPT-4 [28] generalize well across diverse vision-language tasks.
2308.01390#2
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
3
Table 1: Single-Node 8x A100: Training Time and Corresponding Approximate Cost on Azure.

GPUs         | OPT-6.7B         | OPT-13B        | OPT-30B         | OPT-66B
8x A100-40GB | 5.7 hours        | 10.8 hours     | 1.85 days       | NA
8x A100-80GB | 4.1 hours ($132) | 9 hours ($290) | 18 hours ($580) | 2.1 days ($1620)

easy, fast and affordable training of state-of-the-art ChatGPT models with hundreds of billions of parameters, even given access to multi-GPU clusters. These limitations stem from the lack of a robust system design capable of effectively supporting InstructGPT's complex RLHF training pipeline, which is quite different from the standard pre-training and fine-tuning pipelines that existing DL systems are designed for. Therefore, in the spirit of democratizing ChatGPT-like models and making RLHF training truly accessible to the AI community, today we are releasing DeepSpeed-Chat with the following three capabilities:
2308.01320#3
2308.01390
3
[Figure 2: OpenFlamingo-9B (pictured) can process interleaved image-and-text sequences. This interface allows OpenFlamingo to learn many vision-language tasks through in-context demonstrations.]

two open source datasets: LAION-2B [32] and Multimodal C4 [45]. Our stack is built using publicly available components, including CLIP as a vision encoder [30] and open-source language models as decoders [27, 35].
2308.01390#3
2308.01320
4
• Easy-to-use Training and Inference Experience for ChatGPT-Like Models: A single script capable of taking a pre-trained Huggingface [6] model, running it through all three steps of InstructGPT [7] training using the DeepSpeed-RLHF system, and producing your very own ChatGPT-like model. In addition, we provide an inference API for testing conversation-style interactions after the model is trained.

• DeepSpeed-RLHF Pipeline: The DeepSpeed-RLHF pipeline primarily replicates the training pipeline from the InstructGPT [7] paper, with careful attention to ensuring completeness and one-to-one correspondence with its three steps: a) Supervised Fine-tuning (SFT), b) Reward Model Fine-tuning, and c) Reinforcement Learning with Human Feedback (RLHF) [8]. Additionally, we offer data abstraction and blending capabilities to enable training with multiple data sources.
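The reward-model stage in step b) is, as in InstructGPT, typically trained with a pairwise ranking loss over a chosen and a rejected completion for the same prompt. A minimal sketch of that loss in plain Python (scalar scores stand in for real reward-model outputs; this is an illustration, not DeepSpeed-Chat's implementation):

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise ranking loss for reward-model training:
    -log(sigmoid(r_chosen - r_rejected)).
    The loss is small when the chosen answer is scored well above the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(pairwise_reward_loss(2.0, 0.0))  # wide margin -> small loss
print(pairwise_reward_loss(0.0, 0.0))  # no margin -> log(2) ~ 0.693
```

Minimizing this loss pushes the reward model to rank human-preferred answers higher, which is exactly the signal step c) optimizes against.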
2308.01320#4
2308.01390
4
Unfortunately, autoregressive vision-language models are closed-source, and their weights, training data, code, and hyperparameters are proprietary. This limits the academic community's ability to conduct research on autoregressive vision-language models, e.g., to understand how web-scraped image-text data affects models' performance and safety. Open-source alternatives, such as LLaVA [25], LLaMA-Adapter [41], BLIP-2 [23], and mPLUG-Owl [39], only take in single images, and they often train directly on curated datasets like COCO [24] rather than web data. We call the resulting family of five models OpenFlamingo. These models range from 3B to 9B parameters, with both standard and instruction-tuned [37] language model backbones. Averaging performance across 7 evaluation datasets, OpenFlamingo-3B and -9B models attain 85% and 89% of their corresponding Flamingo models respectively (Figure 1). Models and code are open-sourced at https://github.com/mlfoundations/open_flamingo.
2308.01390#4
2308.01320
5
• DeepSpeed-RLHF System: A robust and sophisticated RLHF system that combines the training and inference prowess of DeepSpeed into a single unified Hybrid Engine (DeepSpeed-HE) for RLHF. The Hybrid Engine is capable of seamlessly transitioning between inference and training modes within RLHF, allowing it to leverage various optimizations from DeepSpeed-Inference, such as tensor parallelism and high-performance transformer kernels for generation, while also benefiting from the multitude of ZeRO- and LoRA [9]-based memory optimization strategies for RL training. DeepSpeed-HE is also aware of the full RLHF pipeline, allowing it to make optimal decisions in terms of memory management and data movement across the different phases of RLHF. The DeepSpeed-RLHF system delivers unparalleled efficiency at scale, making complex RLHF training fast, affordable, and easily accessible to the AI community:

Efficiency and Affordability: In terms of efficiency, DeepSpeed-HE is over 15x faster than existing systems, making RLHF training both fast and affordable. For instance, DeepSpeed-HE can train an OPT-13B [10] model in just 9 hours and OPT-30B in 18 hours on Azure Cloud for under $300 and $600, respectively, as shown in Table 1.
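The mode transitions the Hybrid Engine performs each RLHF iteration can be pictured as one set of weights being reconfigured between a generation phase and a training phase. The toy class below illustrates the control flow only; it is not DeepSpeed's actual API, and the class and function names are hypothetical:

```python
class ToyHybridEngine:
    """Toy stand-in for a hybrid engine: one model, two configurations."""

    def __init__(self):
        self.mode = "train"

    def eval(self):
        # Generation phase: a real engine would switch to inference
        # kernels and tensor parallelism here.
        self.mode = "generate"
        return self

    def train(self):
        # Training phase: a real engine would re-enable ZeRO/LoRA-based
        # memory optimizations here.
        self.mode = "train"
        return self

def rlhf_step(engine: ToyHybridEngine) -> list:
    """One RLHF iteration: generate experience, then update the policy."""
    phases = []
    engine.eval()          # roll out answers from the current actor
    phases.append(engine.mode)
    engine.train()         # PPO-style update on the generated experience
    phases.append(engine.mode)
    return phases
```

The point of the real system is that this switch is cheap and automatic, so neither phase pays the other's configuration overhead.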
2308.01320#5
2308.01390
5
In this technical report, we document our experiences building an open-source reproduction of the Flamingo models [3]. Following Flamingo, we augment the layers of pretrained, frozen language models so that they cross-attend to the outputs of a frozen vision encoder while predicting the next token. The cross-modal module is trained on web-scraped image-text sequences, in our case,

# 2 Related work

Generative vision-language models output text conditioned on an image-text sequence. While many such architectures, such as BLIP-2

Table 1: Architecture details of the OpenFlamingo models. All five models use a CLIP ViT-L/14 vision encoder [30]. A cross-attention interval of 4 means that a cross-attention module is inserted every 4th language model layer. Note that OpenFlamingo models labeled (Instruct) use language models that were finetuned on language-only tasks; we have not instruction-tuned OpenFlamingo models on vision-language tasks.
2308.01390#5
2308.01320
6
Excellent Scalability: DeepSpeed-HE supports models with hundreds of billions of parameters and can achieve excellent scalability on multi-node multi-GPU systems. As a result, even a 13B model can be trained in 1.25 hours, and a massive 175B model can be trained with DeepSpeed-HE in under a day, as shown in Table 2.¹

¹Very Important Details: The numbers in both tables (1, 2) above are for Step 3 of the training

Table 2: Multi-Node 64x A100-80GB: Training Time and Corresponding Approximate Cost on Azure.

GPUs         | OPT-13B           | OPT-30B         | OPT-66B           | OPT-175B
64x A100-80G | 1.25 hours ($320) | 4 hours ($1024) | 7.5 hours ($1920) | 20 hours ($5120)

Table 3: Max Model Size Supported by DeepSpeed-HE on a Single GPU.

GPU        | V100 32G | A6000 48G | A100 40G | A100 80G
Model Size | OPT-2.7B | OPT-6.7B  | OPT-6.7B | OPT-13B

Democratizing RLHF Training: With just a single GPU, DeepSpeed-HE supports training models with over 13 billion parameters, as shown in Table 3, enabling data scientists without access to multi-GPU systems to create not just toy RLHF models but large and powerful ones that can be used in real-world scenarios.
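The approximate dollar figures in Table 2 are consistent with a flat per-GPU-hour rate times wall-clock time. The sketch below back-solves that rate; the roughly $4/GPU-hour figure is inferred from the table itself, not an official Azure price:

```python
def approx_cost(hours: float, num_gpus: int, rate_per_gpu_hour: float = 4.0) -> int:
    """Approximate cluster cost: wall-clock hours x GPU count x $/GPU-hour."""
    return round(hours * num_gpus * rate_per_gpu_hour)

# Entries from Table 2 (64x A100-80G): model -> (hours, quoted cost in $).
table2 = {
    "OPT-13B":  (1.25, 320),
    "OPT-30B":  (4.0, 1024),
    "OPT-66B":  (7.5, 1920),
    "OPT-175B": (20.0, 5120),
}
for model, (hours, quoted) in table2.items():
    print(model, approx_cost(hours, 64), quoted)
```

Every Table 2 entry matches exactly at $4/GPU-hour, so end-to-end cost scales linearly with training time at fixed cluster size.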
2308.01320#6
2308.01390
6
Model                      | Language model               | Cross-attention interval | <image>, <|endofchunk|> embeddings
OpenFlamingo-3B            | MPT-1B [27]                  | 1 | Trainable
OpenFlamingo-3B (Instruct) | MPT-1B (Instruct) [27]       | 1 | Trainable
OpenFlamingo-4B            | RedPajama-3B [35]            | 2 | Frozen
OpenFlamingo-4B (Instruct) | RedPajama-3B (Instruct) [35] | 2 | Frozen
OpenFlamingo-9B            | MPT-7B [27]                  | 4 | Trainable

and LLaVA, can incorporate only one image in their context [6, 16, 22, 25, 39, 41], autoregressive vision-language models accept interleaved image-text sequences, enabling in-context learning. (OBELISC instead of Multimodal C4) and language model (LLaMA-9B [41] instead of the MPT and RedPajama-3B models [27, 35]).
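The cross-attention interval column above determines which decoder layers receive a gated cross-attention module: an interval of 4 (the OpenFlamingo-9B setting) inserts one every 4th language model layer. A small sketch of that layout, assuming a 0-indexed "every interval-th layer" convention and an illustrative layer count:

```python
def cross_attention_layer_indices(num_layers: int, interval: int) -> list:
    """Indices of decoder layers that get a gated cross-attention module
    when one is inserted every `interval`-th layer (0-indexed convention)."""
    return [i for i in range(num_layers) if i % interval == 0]

# Interval 1 (the 3B models) puts cross-attention at every layer;
# interval 4 (the 9B model) touches only a quarter of them.
print(cross_attention_layer_indices(8, 1))
print(cross_attention_layer_indices(8, 4))
```

A larger interval trades some vision-conditioning capacity for fewer added parameters and less compute per token.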
2308.01390#6
2308.01320
7
Next, we dive deeper into the three capabilities of DeepSpeed-Chat introduced above.

# 2 Easy-to-use ChatGPT Training and Inference Experience

We start with the easy-to-use experience by showing how you can train OPT-13B and then OPT-66B models with the DeepSpeed-RLHF system. If you are short on time, you can even train an OPT-1.3B model on a single consumer-grade GPU in just two hours. We also demonstrate how you can use our DeepSpeed-Chat RLHF API to develop your own custom pipelines.

2.1 Training your first ChatGPT-style model is so easy with DeepSpeed-Chat's RLHF examples

One single script completes all three stages of RLHF training and generates your first ChatGPT model! We use an example with pretrained OPT-13B as the actor model and OPT-350M as the reward model in the following single script to generate a final 13B ChatGPT-style model:

pip install deepspeed>=0.9.0
2308.01320#7
2308.01390
7
We chose to replicate Flamingo because of its strong in-context learning abilities. Aggregated across evaluation sets, Flamingo models see steady performance improvements up to 32 in-context examples [3]. This is in contrast with other autoregressive vision-language models, for example Kosmos-1 [12]; on the captioning tasks COCO [24] and Flickr-30K [29], Kosmos-1 shows performance improvements up to 4 in-context examples, but performance degrades when using 8 in-context examples.
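In-context evaluation of this kind builds a prompt by interleaving demonstrations before the query, delimited with the <image> and <|endofchunk|> special tokens from Table 1. A minimal sketch of a captioning prompt in that style (illustrative format only, not the exact evaluation-suite code):

```python
def build_interleaved_prompt(demo_captions, query="<image>Output:"):
    """Concatenate captioning demonstrations, each rendered as
    '<image>Output: <caption><|endofchunk|>', then append the query
    so the model completes the caption for the final image."""
    shots = "".join(
        f"<image>Output: {caption}<|endofchunk|>" for caption in demo_captions
    )
    return shots + query

print(build_interleaved_prompt(["two cats sleeping on a sofa", "a swan"]))
```

Scaling from 4 to 32 demonstrations is then just a longer list of shots, which is exactly the regime where Flamingo-style models keep improving and Kosmos-1 degrades.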
2308.01390#7
2308.01320
8
pip install deepspeed>=0.9.0
git clone https://github.com/microsoft/DeepSpeedExamples.git
cd DeepSpeedExamples/applications/DeepSpeed-Chat/
pip install -r requirements.txt

and based on actual measured training throughput on the DeepSpeed-RLHF curated dataset and training recipe, which trains for one epoch on a total of 135M tokens. We have in total 67.5M query tokens (131.9k queries with sequence length 256) and 67.5M generated tokens (131.9k answers with sequence length 256), and a maximum global batch size per step of 0.5M tokens (1024 query-answer pairs). We urge readers to pay attention to these specifications before making any cost and e2e time comparisons with DeepSpeed-RLHF. See our benchmark settings page (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/BenckmarkSetting.md) for more details.

Table 4: DeepSpeed-Chat on a single DGX node with 8 NVIDIA A100-40G GPUs.
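The batch-size specification above can be sanity-checked with simple arithmetic; this is a back-of-envelope sketch, not part of the DeepSpeed-Chat codebase, and the step count it derives is only approximate since it ignores how batches are actually packed:

```python
# Back-of-envelope check of the benchmark specification quoted above.
# Each query-answer pair contributes 256 query + 256 generated tokens.
seq_len_query = 256
seq_len_answer = 256
pairs_per_step = 1024

tokens_per_step = pairs_per_step * (seq_len_query + seq_len_answer)
print(tokens_per_step)         # 524288, i.e. ~0.5M tokens per step

total_tokens = 135_000_000     # one epoch, as stated in the text
steps_per_epoch = total_tokens / tokens_per_step
print(round(steps_per_epoch))  # roughly 257 optimizer steps per epoch
```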
2308.01320#8
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
8
Open-source image-text datasets. Proprietary autoregressive vision-language models are typically trained on closed-source datasets [1, 3, 8, 12]. For example, Flamingo relies on image-text pairs from the ALIGN dataset [14] and interleaved image-text sequences from the M3W dataset [3]; both are unavailable to the public. Recent efforts to replicate these web-scraped datasets include LAION-2B, a dataset of image-text pairs, and Multimodal C4 [45] and OBELISC [18], datasets of image-text sequences. We use LAION-2B and Multimodal C4 for training OpenFlamingo models. Laurençon et al. [18] also train 9B and 80B Flamingo-style models; their models differ in the choice of pretraining dataset.

# 3 Approach

# 3.1 Architecture
2308.01390#8
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
9
Table 4: DeepSpeed-Chat on a single DGX node with 8 NVIDIA A100-40G GPUs.
Model Sizes Step 1 Step 2 Step 3 Total
Actor: OPT-13B, Reward: OPT-350M 2.5hr 0.25hr 10.8hr 13.6hr

Table 5: E2E time breakdown for training a 66 billion parameter ChatGPT model via DeepSpeed-Chat on 8 DGX nodes with 8 NVIDIA A100-80G GPUs/node.
Model Sizes Step 1 Step 2 Step 3 Total
Actor: OPT-66B, Reward: OPT-350M 82 mins 5 mins 7.5hr 9hr

python train.py --actor-model facebook/opt-13b --reward-model facebook/opt-350m --deployment-type single_node

In about half a day, your 13B model would be fully trained with its checkpoints ready. Table 4 demonstrates a breakdown of the training time for each of the three steps.

Test your final model with DeepSpeed-Chat's Inference API. Now it is time to plug in and test your final model trained by DeepSpeed-Chat! You can play with not only Q&A sessions but also conversation-style interactions.
2308.01320#9
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
9
# 3 Approach

# 3.1 Architecture

We match the Flamingo architecture [3]. Given an interleaved sequence of images with text tokens, OpenFlamingo models predict the next text token conditioned on all previous text tokens and the last preceding image. Text tokens attend to their corresponding images via dense cross-attention modules, which we attach to the layers of a frozen, autoregressive language model. To embed images, we extract patch features from a frozen vision encoder and pass these through a trainable Perceiver resampler [13]. As a preprocessing step, we first mark the locations of images in the text sequence with <image> tokens. We also insert <|endofchunk|> tokens after the text tokens following an image; e.g. the sequence x Hello world, where x is an image, would be preprocessed into <image> Hello world <|endofchunk|>. Unlike Flamingo, we do not support video inputs at this time. We leave this for future work. Table 1 describes the five OpenFlamingo models based on their language model and density of cross-attention layers.
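The preprocessing convention above can be sketched in a few lines. This is an illustrative sketch only: the function name `preprocess_sequence` and the list-of-parts input format are assumptions for exposition, not the OpenFlamingo API.

```python
IMAGE_TOKEN = "<image>"
END_OF_CHUNK = "<|endofchunk|>"

def preprocess_sequence(parts):
    """Illustrative sketch (not the OpenFlamingo API): mark each image
    position with <image> and close each image-grounded text chunk with
    <|endofchunk|>. `parts` is a list of ("image", _) or ("text", str)."""
    out = []
    for kind, value in parts:
        if kind == "image":
            out.append(IMAGE_TOKEN)
        else:
            out.append(value)
            out.append(END_OF_CHUNK)
    return " ".join(out)

# The example from the text: an image x followed by "Hello world".
print(preprocess_sequence([("image", None), ("text", "Hello world")]))
# -> <image> Hello world <|endofchunk|>
```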
2308.01390#9
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
10
Table 1 describes the five OpenFlamingo models based on their language model and density of cross-attention layers.

Figure 3: Samples from (A) LAION-2B [32], (B) Multimodal C4 [45], and (C) ChatGPT-generated data. [Figure content: example image-text data from each source, e.g. a captioned image from LAION-2B, an interleaved MMC4 web article about Japan's Golden Week holidays, and a short ChatGPT-generated interleaved sequence.]

Table 2: Statistics for training datasets. "ChatGPT" stands for the ChatGPT-generated sequences. The median numbers of images and tokens per sequence were calculated using a random sample of 1,000 sequences.
2308.01390#10
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
11
Assistant: Sure, I can try. Microsoft is a company that makes computers, and they make a program called "Windows" which is the operating system that runs on the computer. It's like the "brain" of the computer. It's where all the programs and files are stored. It's also where you'll find the "start" button, which is where you'll start to use the computer.

# 2.2 Want to try different model sizes and configurations? You got it!
2308.01320#11
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
11
Table 2: Statistics for training datasets.
Dataset / Median images per sequence / Median tokens per sequence
LAION-2B / 1 / 17
MMC4 / 2 / 256
ChatGPT / 3 / 56

of cross-attention layers; all models use CLIP ViT-L/14 [30] as a vision encoder. In most cases, the <image> and <|endofchunk|> embeddings are trainable, while other text embeddings are frozen. For the OpenFlamingo-4B models, all embeddings are frozen, including the randomly initialized <image> and <|endofchunk|> embeddings. This was due to complications with gradient masking when using Fully Sharded Data Parallel (§3.3).

# 3.2 Training data

We train our models on a mixture of image-text pairs and interleaved image-text sequences. During training, we sample dataset shards with replacement using the WebDataset format [34].

Figure 4: Histograms of the number of text tokens and images per MMC4 sequence, based on a sample of 1,000 sequences. Sequences are long with few images.
2308.01390#11
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
12
# 2.2 Want to try different model sizes and configurations? You got it!

We understand users often like to play with different model sizes and configurations to meet their training time, resource, and quality requirements. With DeepSpeed-Chat, users can easily do that. For example, if you want to train a larger and higher-quality model on your GPU cluster for your research or business, you can simply use the same script with your desired model size, e.g., 66B, and GPU count, e.g., 64 GPUs:

python train.py --actor-model facebook/opt-66b --reward-model facebook/opt-350m --deployment-type multi_node

Within 9 hours, you can have your 66-billion-parameter ChatGPT model ready to be served in your favorite front-end GUI as shown in Table 5.

Table 6: DeepSpeed-Chat on a single commodity NVIDIA A6000 GPU with 48GB memory.
Model Sizes Step 1 Step 2 Step 3 Total
Actor: OPT-1.3B, Reward: OPT-350M 2900 secs 670 secs 1.2hr 2.2hr
2308.01320#12
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
12
Figure 4: Histograms of the number of text tokens and images per MMC4 sequence, based on a sample of 1,000 sequences. Sequences are long with few images.

LAION-2B [32]. When training Flamingo, Alayrac et al. [3] use ALIGN [14], a closed-source dataset of over 1B single images paired with short alt-text captions. To train OpenFlamingo, we replace ALIGN with LAION-2B, an open-source web-scraped dataset consisting of 2B image-text pairs (Figure 3A). We use part of the English subset and truncate captions to 32 tokens. All image-text pairs in LAION-2B have a cosine similarity of at least 0.28 according to CLIP ViT-B/32.
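The LAION-2B preparation described above (keep pairs meeting the CLIP similarity threshold, truncate captions to 32 tokens) can be sketched as follows. The function name and the whitespace "tokenizer" are illustrative assumptions standing in for the real CLIP tokenizer and pipeline:

```python
def filter_and_truncate(pairs, sim_threshold=0.28, max_tokens=32):
    """Sketch of the LAION-2B preparation described above: keep pairs
    whose CLIP image-text similarity meets the threshold, and truncate
    captions to `max_tokens`. Whitespace splitting is an illustrative
    stand-in for the real tokenizer."""
    kept = []
    for caption, sim in pairs:
        if sim < sim_threshold:
            continue  # below the 0.28 cosine-similarity cutoff
        tokens = caption.split()[:max_tokens]
        kept.append(" ".join(tokens))
    return kept

pairs = [("a cat sitting on a mat", 0.31), ("unrelated alt text", 0.12)]
print(filter_and_truncate(pairs))  # -> ['a cat sitting on a mat']
```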
2308.01390#12
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
13
Model Sizes Step 1 Step 2 Step 3 Total Actor: OPT-1.3B, Reward: OPT-350M 2900 secs 670 secs 1.2hr 2.2hr If you only have around 1-2 hours for coffee or lunch break, you can also try to train a small/toy model with DeepSpeed-Chat. For example, we prepared a training example for a 1.3B model with a single dataset to test our framework on your consumer-grade GPUs. The best part is that you will have your model checkpoint ready to play with when you are back from your lunch break (see Table 6)! python t r a i n . py −−a c t o r −model −−reward−model f a c e b o o k / opt −1.3b # 2.3 Customizing your own RLHF training pipeline using DeepSpeed- Chat’s RLHF APIs DeepSpeed-Chat allows users to build their very own RLHF training pipeline using our flexible APIs shown below, which users can use to reconstruct their own RLHF training strategy. This enables a general interface and backend for creating a wide range of RLHF algorithms for research exploration. e n g i n e = DeepSpeedRLHFEngine (
2308.01320#13
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
13
Multimodal C4 [45]. In addition to image-text pairs, Alayrac et al. [3] train Flamingo using M3W, an internal web-scraped dataset of 43M interleaved image-text sequences. We replace M3W with Multimodal C4 (MMC4), an open-source dataset of 101M interleaved samples (Figure 3B). Unlike M3W or OBELISC [18], which directly parse HTML documents to extract multimodal sequences, MMC4 uses CLIP to soft align images with sentences in a document. To ensure data quality, we exclude images if their cosine similarity with the subsequent text falls below 0.24, according to CLIP ViT-L/14. Sequences contain between 1 and 6 images (median 2). To encourage learning from sequences with multiple images, we reject single-image sequences with probability 0.5. The resulting distribution is shown in Figure 4. Additional notes on MMC4 filtering are in Appendix B.
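The two MMC4 filtering rules above (similarity threshold 0.24, probabilistic rejection of single-image sequences) can be sketched as a small function. The function name and the list-of-similarities input are illustrative assumptions, not the MMC4 tooling:

```python
import random

def filter_mmc4_sequence(image_sims, rng,
                         sim_threshold=0.24, single_image_reject_p=0.5):
    """Sketch of the MMC4 filtering described above. `image_sims` holds
    the CLIP similarity between each image and its aligned sentence.
    Returns surviving image indices, or None if the sequence is dropped."""
    keep = [i for i, s in enumerate(image_sims) if s >= sim_threshold]
    if not keep:
        return None  # no image passed the 0.24 cutoff
    if len(keep) == 1 and rng.random() < single_image_reject_p:
        return None  # reject single-image sequences with probability 0.5
    return keep

rng = random.Random(0)
print(filter_mmc4_sequence([0.30, 0.10, 0.25], rng))  # -> [0, 2]
```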
2308.01390#13
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
14
engine = DeepSpeedRLHFEngine(
    actor_model_name_or_path=args.actor_model_name_or_path,
    critic_model_name_or_path=args.critic_model_name_or_path,
    tokenizer=tokenizer,
    num_total_iters=num_total_iters,
    args=args)

trainer = DeepSpeedPPOTrainer(engine=engine, args=args)

for prompt_batch in prompt_train_dataloader:
    out = trainer.generate_experience(prompt_batch)
    actor_loss, critic_loss = trainer.train_rlhf(out)

# 3 Full-fledged RLHF Training Pipeline
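Inside a `train_rlhf` step of this kind, the actor is typically updated with the standard clipped PPO surrogate (PPO is the algorithm DeepSpeed-Chat's trainer is named after). The sketch below shows generic per-token PPO in plain Python; it is the textbook objective from Schulman et al., not DeepSpeed-Chat's exact implementation:

```python
def ppo_clipped_objective(ratio, advantage, clip_eps=0.2):
    """Standard PPO clipped surrogate for one token: ratio is
    pi_new(a|s) / pi_old(a|s). The actor loss is the negative of this,
    averaged over tokens. Generic PPO, not DeepSpeed-Chat's exact code."""
    clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
    return min(ratio * advantage, clipped * advantage)

# A ratio far from 1 is clipped, bounding the size of the policy update.
print(ppo_clipped_objective(ratio=1.5, advantage=1.0))   # -> 1.2
print(ppo_clipped_objective(ratio=0.5, advantage=-1.0))  # -> -0.8
```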
2308.01320#14
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
14
Synthetic data. For the OpenFlamingo-4B models, we also experimented with training on ChatGPT-generated synthetic data (Figure 3C). These 417K image-text sequences were generated by prompting ChatGPT to generate a sequence of interleaved text and image alt-texts (in place of images). The alt-texts are used to retrieve corresponding images from LAION-5B. Additional details of the prompting and data construction process are described in Appendix C. The median number of images per sequence is higher than in MMC4, while the median number of text tokens is lower (Table 2). We release these sequences through the OpenFlamingo repository. # 3.3 Training details OpenFlamingo models were trained for 60M interleaved (MMC4) examples [1] and 120M LAION-2B examples. All models are trained using the next-token prediction objective and optimized with [1: OpenFlamingo-4B models use both MMC4 and ChatGPT-generated data as interleaved sequences; 60M interleaved examples translates to approximately 240K ChatGPT-generated sequences and 59.8M MMC4 sequences. Other models train on 60M MMC4 examples.] Table 3: Training used either DistributedDataParallel (DDP) or FullyShardedDataParallel (FSDP) [43].
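The alt-text-to-image retrieval step above can be pictured with a minimal sketch. This is not the paper's code: `build_interleaved_sequence` and the toy retrieval stub are hypothetical names standing in for the described ChatGPT-prompting and LAION-5B lookup pipeline.

```python
# Hypothetical sketch of the synthetic-sequence construction described above:
# ChatGPT emits alternating text and image alt-texts; each alt-text is later
# replaced by an image retrieved from LAION-5B. All names are illustrative.

def build_interleaved_sequence(segments, retrieve_image):
    """segments: list of ("text", str) or ("alt", str) tuples."""
    sequence = []
    for kind, value in segments:
        if kind == "text":
            sequence.append({"type": "text", "content": value})
        else:  # an alt-text becomes a retrieved-image placeholder
            sequence.append({"type": "image", "source": retrieve_image(value)})
    return sequence

# Toy retrieval stub standing in for a LAION-5B nearest-neighbour lookup.
fake_index = {"a red bicycle": "laion://img/123", "a snowy street": "laion://img/456"}
seq = build_interleaved_sequence(
    [("alt", "a red bicycle"), ("text", "I rode it downtown."), ("alt", "a snowy street")],
    lambda alt: fake_index[alt],
)
```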
2308.01390#14
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
15
# 3 Full-fledged RLHF Training Pipeline

To provide a seamless training experience, we follow InstructGPT and include a full-fledged end-to-end training pipeline in DeepSpeed-Chat as shown in Figure 1. Our pipeline includes three main steps:

• Step 1: Supervised finetuning (SFT), where human responses to various queries are carefully selected to finetune the pretrained language models.

• Step 2: Reward model finetuning, where a separate (usually smaller than the SFT) model (RW) is trained with a dataset that has human-provided rankings of multiple answers to the same query.

Figure 1: The illustration of DeepSpeed-Chat's RLHF training pipeline with optional features.
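The excerpt names Step 2's ranking data but not its objective. The sketch below shows the standard InstructGPT-style pairwise loss that such reward-model finetuning typically uses; this is an assumption for illustration, not DeepSpeed-Chat's actual code: the preferred answer's score is pushed above the rejected one via -log sigmoid(r_chosen - r_rejected).

```python
import math

def pairwise_reward_loss(r_chosen, r_rejected):
    # -log sigmoid(margin): small when the chosen answer scores well above
    # the rejected one, log(2) when the two scores tie.
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss shrinks monotonically as the margin between the chosen and rejected scores grows, which is what drives the reward model toward the human ranking.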
2308.01320#15
2308.01390
15
Table 3: Training used either DistributedDataParallel (DDP) or FullyShardedDataParallel (FSDP) [43].

| Model     | GPU type  | Sharding strategy | Precision |
|-----------|-----------|-------------------|-----------|
| OF-3B     | A100-80GB | DDP               | fp32      |
| OF-3B (I) | A100-40GB | DDP               | fp32      |
| OF-4B     | A100-40GB | FSDP              | fp32      |
| OF-4B (I) | A100-40GB | FSDP              | fp32      |
| OF-9B     | A100-80GB | DDP               | amp bf16  |

AdamW. The learning rate is linearly increased at the beginning of training, and then held constant at 1e-4 throughout training. We apply weight decay of 0.1 on the dense cross-attention layers. The batch size for LAION-2B is twice the batch size of the interleaved dataset (MMC4, optionally with ChatGPT-generated sequences), and the loss weights are set to the Flamingo defaults of 1 and 0.2 for MMC4 and LAION-2B respectively. We accumulate gradients over both datasets between optimizer steps.

Distributed training. We train all models using 64 GPUs distributed across 8 nodes on Stability AI's cluster (Table 3). OpenFlamingo-4B models were trained using model sharding with Fully Sharded Data Parallel [43]; other models were trained using only data parallel.
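The schedule and loss weighting just described can be written out directly. The weights (1 and 0.2) and the 1e-4 peak rate come from the text; `warmup_steps` is an assumed knob, since the excerpt does not state the warmup length.

```python
def lr_at(step, warmup_steps=1000, peak_lr=1e-4):
    # Linear warmup to peak_lr, then held constant (as described above).
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return peak_lr

def combined_loss(loss_mmc4, loss_laion, w_mmc4=1.0, w_laion=0.2):
    # Flamingo-default loss weights: 1 for MMC4, 0.2 for LAION-2B.
    return w_mmc4 * loss_mmc4 + w_laion * loss_laion
```

In practice the two losses come from separate batches whose gradients are accumulated between optimizer steps, matching the accumulation scheme in the text.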
2308.01390#15
2308.01320
16
Figure 1: The illustration of DeepSpeed-Chat's RLHF training pipeline with optional features.

• Step 3: RLHF training, where the SFT model is further finetuned with the reward feedback from the RW model using the Proximal Policy Optimization (PPO) [11] algorithm.

We provide two additional features in Step 3 to help improve model quality:

• Exponential Moving Average (EMA) collection, where an EMA-based checkpoint can be chosen for the final evaluation.

• Mixture Training, which mixes the pretraining objective (i.e., next-word prediction) with the PPO objective to prevent performance regression on public benchmarks like SQuAD2.0 [12].

The two training features, EMA and Mixture Training, are often omitted by other recent efforts since they are optional. However, according to InstructGPT, EMA checkpoints generally provide better response quality than the conventional final trained model, and Mixture Training helps the model retain its pretraining benchmark-solving ability. As such, we provide both so that users can fully reproduce the training experience described in InstructGPT and strive for higher model quality.

In addition to being highly consistent with the InstructGPT paper [7], we also provide convenient features to support researchers and practitioners in training their own RLHF models with multiple data resources:
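The two optional Step-3 features above reduce to two small update rules. This is a hedged pure-Python sketch, not the DeepSpeed implementation: the EMA coefficient `alpha` and the mixing weight `beta` are assumed hyperparameters.

```python
def ema_update(ema_weights, actor_weights, alpha=0.99):
    # EMA collection: ema <- alpha * ema + (1 - alpha) * actor, per parameter
    # (Figure 1 writes this as aEMA + (1 - a)Actor).
    return [alpha * e + (1.0 - alpha) * w for e, w in zip(ema_weights, actor_weights)]

def mixture_loss(ppo_loss, pretrain_loss, beta=0.5):
    # Mixture Training: blend the PPO objective with the pretraining
    # (next-word prediction) objective; beta is an assumed mixing weight.
    return ppo_loss + beta * pretrain_loss
```

Because `alpha` is close to 1, the EMA checkpoint changes slowly and averages out per-step PPO noise, which is why it can yield better final-evaluation quality.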
2308.01320#16
2308.01390
16
Loss curves. Figure 5 tracks LAION-2B and MMC4 loss over the course of training. After an initial improvement, MMC4 loss decreases very slowly. We speculate that, since MMC4 sequences tend to include long paragraphs between images (Figure 2), most text tokens can be generated without referencing the image. Thus, the loss may be dominated by whether the frozen language model can fit unrelated paragraphs of text.

Figure 5: MMC4 and LAION-2B language modeling loss throughout training. Curves shown with Gaussian smoothing with window size 100.

# 3.4 Evaluation method

We evaluate OpenFlamingo on seven vision-language datasets including captioning (COCO [7], Flickr-30K [40]), visual question answering
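Figure 5's curves are shown with Gaussian smoothing of window size 100. A minimal pure-Python sketch of such smoothing follows; the choice sigma = window / 6 is an assumption for illustration (the paper does not state its sigma), and a small window is used in the example.

```python
import math

def gaussian_smooth(values, window=5):
    # Weighted moving average with Gaussian weights centred on each point;
    # weights are renormalized near the ends of the series.
    sigma = window / 6.0
    half = window // 2
    out = []
    for i in range(len(values)):
        num = den = 0.0
        for j in range(max(0, i - half), min(len(values), i + half + 1)):
            w = math.exp(-((j - i) ** 2) / (2.0 * sigma * sigma))
            num += w * values[j]
            den += w
        out.append(num / den)
    return out
```

Smoothing leaves a constant series unchanged and shrinks the spread of a noisy one, which is exactly the visual effect in the loss plots.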
2308.01390#16
2308.01320
17
In addition to being highly consistent with the InstructGPT paper [7], we also provide convenient features to support researchers and practitioners in training their own RLHF models with multiple data resources:

• Data Abstraction and Blending Capabilities: DeepSpeed-Chat is able to train the model with multiple datasets for better model quality. It is equipped with (1) an abstract dataset layer to unify the format of different datasets; and (2) data splitting/blending capabilities so that the multiple datasets are properly blended and then split across the three training stages.

To illustrate the effectiveness of our training pipeline, we demonstrate the model quality with multi-round conversation as shown in the experience section.
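The splitting/blending idea above can be sketched in a few lines. This is an illustrative stand-in, not DeepSpeed-Chat's dataset layer: each normalized dataset is deterministically split across the three stages (so no example leaks between stages), and same-stage shards from different datasets are then blended; the split fractions are assumed.

```python
def split_across_stages(dataset, fractions=(0.2, 0.4, 0.4)):
    # Deterministically partition one dataset into SFT / RW / RLHF shards.
    assert abs(sum(fractions) - 1.0) < 1e-9
    n = len(dataset)
    cut1 = int(n * fractions[0])
    cut2 = cut1 + int(n * fractions[1])
    return dataset[:cut1], dataset[cut1:cut2], dataset[cut2:]

def blend(*stage_shards):
    # Concatenate the same stage's shards from multiple datasets.
    merged = []
    for shard in stage_shards:
        merged.extend(shard)
    return merged
```

Because the split happens before blending, an example used for reward-model training can never reappear as an RLHF prompt, which is the point of stage-wise splitting.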
2308.01320#17
2308.01390
17
The evaluation suite covers visual question answering (VQAv2 [2], OK-VQA [26], TextVQA [33], VizWiz [11]) and rank classification (HatefulMemes [15]). For each dataset, we measure performance at 0, 4, 8, 16, and 32 in-context examples. Evaluation was done in automatic mixed precision, with linear layers computed in bfloat16.

Selecting in-context examples. For each evaluation example, we sample in-context examples from the training split uniformly at random. Additionally, in Appendix A.2, we include evaluations of OpenFlamingo using Retrieval-based In-Context Example Selection (RICES) [38].

Evaluation subsets. We evaluate on the dataset splits used by Alayrac et al. [3]. We run each evaluation across three seeds, where the randomness is over the selected in-context demonstrations, and average the results to obtain our final scores.
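The protocol above (uniform random demonstrations, three seeds, averaged scores) can be sketched directly; `score_fn` is a hypothetical stand-in for whichever metric the dataset uses.

```python
import random

def evaluate(test_set, train_set, score_fn, num_shots=4, seeds=(0, 1, 2)):
    # For each seed: sample in-context demos uniformly at random from the
    # training split for every test example, score, and average; then average
    # the per-seed means, matching the three-seed protocol described above.
    per_seed = []
    for seed in seeds:
        rng = random.Random(seed)
        scores = []
        for example in test_set:
            demos = rng.sample(train_set, num_shots)
            scores.append(score_fn(example, demos))
        per_seed.append(sum(scores) / len(scores))
    return sum(per_seed) / len(per_seed)
```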
2308.01390#17
2308.01320
18
# 4 DeepSpeed Hybrid Engine – Unified Infrastructure to Power and Optimize RLHF Training

Step 1 and Step 2 of the instruct-guided RLHF pipeline resemble regular fine-tuning of large models, and they are powered by ZeRO-based optimizations and a flexible combination of parallelism strategies in DeepSpeed training to achieve scale and speed. Step 3 of the pipeline, on the other hand, is the most complex part to handle in terms of performance implications. Each iteration requires efficient processing of two phases: (a) an inference phase for token/experience generation, producing inputs for the training; and (b) a training phase to update the weights of the actor and reward models, as well as the interaction and scheduling between them. This introduces two major costs: (1) the memory cost, as several copies of the SFT and RW models need to be served throughout Stage 3; and (2) the predominant generation phase, which, if not accelerated properly, will significantly slow down the entire Stage 3. Additionally, the two important features we added in Stage 3, Exponential Moving Average (EMA) collection and Mixture Training, incur additional memory and training costs.
2308.01320#18
2308.01390
18
Prompts. For captioning tasks, we format demonstrations as <image> Output: [caption], replacing [caption] with the ground-truth caption. For VQA, we format examples as <image> Question: [question] Short answer: [answer]. For HatefulMemes, we prompt the model with <image> is an image with: '[text]' written on it. Is it hateful? Answer: [answer].

Following Alayrac et al. [3], we prompt the model with two in-context examples during zero-shot evaluations, removing their images, and for classification tasks, we implement prompt ensembling by averaging logits across 6 permutations of the in-context examples.

Decoding parameters. We evaluate captioning and VQA using beam search with 3 beams, stopping generation at 20 tokens for captioning, 5 tokens for VQA, or whenever the model produces an <|endofchunk|> token. For HatefulMemes, we compute the log-likelihood of the completions "yes" and "no" and answer with the most likely completion.
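The prompt templates above, written out as string builders (the template strings are verbatim from the text; the helper names are ours, and the <image> token marks where image features are injected):

```python
def caption_prompt(caption=""):
    # "<image> Output: [caption]"; empty caption yields the query prefix.
    return f"<image> Output: {caption}".rstrip()

def vqa_prompt(question, answer=""):
    # "<image> Question: [question] Short answer: [answer]"
    return f"<image> Question: {question} Short answer: {answer}".rstrip()

def hateful_memes_prompt(text, answer=""):
    # "<image> is an image with: '[text]' written on it. Is it hateful? Answer: [answer]"
    return (f"<image> is an image with: '{text}' written on it. "
            f"Is it hateful? Answer: {answer}").rstrip()
```

Leaving `answer`/`caption` empty produces the query form of each prompt, to which the model appends its generation.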
2308.01390#18
2308.01320
19
To tackle these challenges, we composed the full system capability of DeepSpeed Training and Inference into a unified infrastructure that we call the Hybrid Engine. It leverages the original DeepSpeed engines for the fast training mode while effortlessly applying the DeepSpeed inference engine for the generation/evaluation mode, providing a significantly faster training system for RLHF training at Stage 3. As Figure 2 shows, the transition between the DeepSpeed training and inference engines is seamless: with the typical eval and train modes enabled for the actor model, when running the inference and training pipeline, DeepSpeed selects its different optimizations to run the model faster and improve overall system throughput.

During its inference execution for the experience generation phase of RLHF training, DeepSpeed Hybrid Engine uses a lightweight memory management system to handle the KV-cache and intermediate results, together with highly optimized inference-adapted kernels and a tensor parallelism implementation, to achieve a significant boost in throughput (tokens per second) compared to existing solutions.

During the training execution, Hybrid Engine enables memory optimization techniques such as DeepSpeed's ZeRO family of technologies and Low-Rank Adaptation (LoRA). We designed and implemented these system optimizations in a way that they are compatible with each other and can be composed together to deliver the highest training efficiency under the unified Hybrid Engine.
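As a toy illustration (not the real DeepSpeed API), the per-iteration mode switch described above can be pictured as a wrapper that flips the actor between an inference-optimized generation path and a sharded training path; every name below is illustrative.

```python
class ToyHybridEngine:
    """Minimal stand-in for the eval/train mode switching idea."""

    def __init__(self, model):
        self.model = model
        self.mode = "train"

    def generate(self, prompts):
        # Experience generation: switch to the inference path
        # (in the real system: inference kernels, KV-cache, tensor parallelism).
        self.mode = "eval"
        return [f"{p} -> response" for p in prompts]

    def train_step(self, experience):
        # Weight update: switch back to the training path
        # (in the real system: ZeRO sharding, optionally LoRA).
        self.mode = "train"
        return {"actor_loss": 0.0, "critic_loss": 0.0}
```

The point of the design is that callers see one engine object; the expensive repartitioning between the two layouts happens inside the mode switch.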
2308.01320#19
2308.01390
19
Metrics. For captioning, we use CIDEr score [36]. For VQA, we report VQA accuracy, i.e., exact-match accuracy over a set of ground-truth answers [2]. For HatefulMemes, we compute AUC ROC.

# 4 Results

In Table 4, we compare OpenFlamingo and Flamingo models across 0, 4, and 32 in-context examples. On average, OpenFlamingo-3B, -3B (Instruct), -4B (Instruct), and -9B attain more than 86% of the performance of their corresponding Flamingo models (Figure 1). In the 0- and 4-shot regimes, OpenFlamingo models approach or match Flamingo performance on

Evaluations with random demonstrations
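The VQA accuracy metric referenced above [2] scores an answer as min(1, matches / 3) against the set of human ground-truth answers, so an answer given by at least three annotators counts as fully correct. A sketch (omitting the official answer-normalization preprocessing):

```python
def vqa_accuracy(prediction, ground_truths):
    # Standard VQA accuracy: credit grows with the number of annotators who
    # gave this exact answer, saturating at 3 matches.
    matches = sum(1 for gt in ground_truths if gt == prediction)
    return min(1.0, matches / 3.0)
```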
2308.01390#19
2308.01320
20
Hybrid Engine can seamlessly change model partitioning across training and inference to support tensor-parallelism-based inference and a ZeRO-based sharding mechanism for training. It can also reconfigure the memory system to maximize memory availability in each of these modes. This allows for improved performance by avoiding memory allocation bottlenecks and supporting large batch sizes. Packed with a spectrum of system technologies from DeepSpeed training and inference, Hybrid Engine pushes the boundary of modern RLHF training and delivers unparalleled scale and system efficiency for RLHF workloads.

# 5 DeepSpeed RLHF: Unparalleled Scale and Efficiency via Hybrid Engine

# 5.1 Capability Recap

As discussed, DeepSpeed-HE is an amalgamation of powerful system technologies for inference and training, architected to achieve excellent scale and efficiency for the DeepSpeed-RLHF pipeline.

Figure 2: DeepSpeed Hybrid Engine design for accelerating the most time-consuming portion of a RLHF pipeline. The actor model's train()/eval() modes switch between the training engine (ZeRO technology, LoRA, efficient optimizer, offloading) and the inference engine (efficient KV-caching, tensor parallelism, optimized memory usage, customized kernels, higher inference throughput), with data remapping, parallelism switching, and memory management handled at the transition.
2308.01320#20
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
20
In the 0- and 4-shot regimes, OpenFlamingo models approach or match Flamingo performances on [Figure 6 panels, "Evaluations with random demonstrations": per-dataset curves of score versus number of in-context examples for COCO, Flickr30K, HatefulMemes, OK-VQA, TextVQA, VizWiz, VQAv2, and the dataset average, comparing Flamingo-3B and Flamingo-9B (dashed) against OF-3B, OF-3B (I), OF-4B, OF-4B (I), and OF-9B.]
2308.01390#20
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
21
Figure 2: DeepSpeed Hybrid Engine design for accelerating the most time-consuming portion of a RLHF pipeline. across a wide range of hardware, making RLHF training fast, affordable, and easily accessible to the AI community. In terms of efficiency and affordability, as shown in Table 1, DeepSpeed-HE can train OPT-13B in just 9 hours and OPT-30B in 18 hours on Azure Cloud for under $300 and $600, respectively. In terms of speed and scalability, as shown in Table 2, even a 13B model can be trained in 1.25 hours and a massive 175B model can be trained in under a day using a 64 GPU cluster. And in terms of accessibility and democratization of RLHF, DeepSpeed-HE supports training models with over 13 billion parameters on a single GPU, as shown in Table 3. # 5.2 Throughput and Model Size Scalability Comparisons with Existing RLHF Systems Compared to other RLHF systems like Colossal-AI [13] or HuggingFace [6] powered by native PyTorch [14], DeepSpeed-RLHF excels in system performance and model scalability:
2308.01320#21
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
21
Figure 6: Evaluation results per dataset across 0, 4, 8, 16, and 32 in-context examples. Each point is the average across three evaluation runs, where the randomness is over choice of in-context demonstrations. Error bars are standard deviations over random seeds. Results are reported in tabular form in Table 11. several datasets. For example, OpenFlamingo-9B improves upon Flamingo-9B's 0-shot performance on VQAv2 (51.8% → 52.7% VQA accuracy) and COCO (79.4 → 79.5 CIDEr), and OpenFlamingo-9B approaches Flamingo-9B's 0-shot performance on Flickr-30K and VizWiz. Moreover, OpenFlamingo-9B approaches the 4-shot performance of Flamingo-9B on COCO, VQAv2, and VizWiz.
2308.01390#21
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
22
• With respect to throughput, DeepSpeed enables over 10x improvement for RLHF training on a single GPU (Figure 3). On multi-GPU setups, it enables 6 – 19x speedup over Colossal-AI and 1.4 – 10.5x over HuggingFace DDP (Figure 4). • With respect to model scalability, while Colossal-AI can run a max model size of 1.3B on a single GPU and 6.7B on a single A100 40G node, DeepSpeed-HE can run 6.5B and 50B models respectively on the same hardware, up to 7.5x larger. Therefore, with over an order of magnitude higher throughput, DeepSpeed-HE unlocks the ability to train significantly larger actor models under the same latency budget or train models [Figure 3 bar charts: end-to-end throughput (seqs/sec) for OPT-1.3B and OPT-6.7B on a single A100-40G GPU, comparing DS-Chat, CAI-Coati, and HF-DDP, with a 50.6x advantage annotated for DS-Chat.]
2308.01320#22
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
22
However, on OK-VQA and TextVQA, OpenFlamingo models are notably weaker than their Flamingo counterparts: OpenFlamingo-9B underperforms Flamingo-9B in 0-shot evaluations by 6.9 percentage points on OK-VQA and 7.8 percentage points on TextVQA. OpenFlamingo-3B also underperforms Flamingo-3B by 4.6 percentage points in 0-shot VQAv2 accuracy. The reason for generally low VQA performance is unclear, although discussions in §5.2 may be related.
2308.01390#22
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
23
Figure 3: Step 3 throughput comparison against two other system frameworks for accelerating RLHF training on a single NVIDIA A100-40G commodity GPU. Missing icons represent OOM scenarios. of similar size at over 10x lower cost, compared to existing RLHF systems like Colossal-AI or HuggingFace DDP. This improvement in efficiency stems from DeepSpeed-HE's ability to accelerate the generation phase of RLHF processing by leveraging DeepSpeed inference optimizations. Figure 5 shows the time breakdown for a 1.3B parameter model at an RLHF training iteration: the majority of the time goes to the generation phase. By leveraging high-performance inference kernels from DeepSpeed, DeepSpeed-HE can achieve up to 9x throughput improvement during this phase over HuggingFace and 15x over Colossal-AI, allowing it to achieve unparalleled end-to-end efficiency. # 5.3 Effective Throughput and Scalability Analysis
2308.01320#23
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
23
Extrapolating to more in-context examples. In Figure 6, we plot performance as a function of the number of in-context examples. We observe that the OpenFlamingo-3B and -9B models generally improve with the number of in-context examples. However, the rate of improvement is lower than for the Flamingo models: in the bottom right corner of Figure 6, we observe that gaps between OpenFlamingo-9B and Flamingo-9B widen with the number of in-context examples. We speculate that this behavior may stem from the quality of our pre-training data, which mostly consists of sequences with few images (Table 2). In contrast with the -3B and -9B models, which generally improve with more in-context examples, the OpenFlamingo-4B models unexpectedly degrade in performance after 4 or 8 shots. The 4B models use RedPajama language models [35] instead of MPT backbones [27]; they also use frozen <image> and <|endofchunk|> embeddings. We investigate the effect of the latter in §5.1. Trends by model size. OpenFlamingo-9B generally outperforms smaller models, except on
2308.01390#23
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
24
# 5.3 Effective Throughput and Scalability Analysis (I) Effective Throughput Analysis. The effective throughput of DeepSpeed-HE during Stage 3 of the RLHF training depends on the throughput that it achieves during the generation and RL training phases. In our RLHF pipeline, the generation phase comprises approximately 20% of the total computation while the RL training phase comprises the remaining 80% (see the benchmark settings page at https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/BenckmarkSetting.md for details). However, despite its small proportion, the former can take a large portion of the e2e time, as it requires running the actor model once for each of the 256 generated tokens with an initial prompt of 256 tokens, making it memory-bandwidth bound and difficult to achieve high throughput for. In contrast, the RL training phase is compute bound, running the reference actor model with just a couple of forward and backward passes over the full 512 tokens from both prompt and generation per sample, and can achieve good throughput.
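Under the stated 20%/80% compute split, the blended (effective) throughput is the compute-weighted harmonic combination of the two per-phase throughputs. A minimal sketch — the function name and the exact weighting are illustrative assumptions, not DeepSpeed-HE's reported methodology:

```python
def effective_throughput(gen_tflops, train_tflops, gen_frac=0.2):
    # Total FLOPs F split as gen_frac*F in generation and (1-gen_frac)*F
    # in RL training. Time = gen_frac*F/gen_tput + (1-gen_frac)*F/train_tput,
    # and effective throughput = F / time (F cancels out).
    train_frac = 1.0 - gen_frac
    return 1.0 / (gen_frac / gen_tflops + train_frac / train_tflops)

# A memory-bandwidth-bound generation phase drags down the blend:
# 25 TFLOPs generation + 100 TFLOPs training -> 62.5 TFLOPs effective.
print(effective_throughput(25.0, 100.0))
```

The effective number always lies between the two phase throughputs, pulled toward the slower phase, which is why accelerating generation matters despite its small share of the FLOPs.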
2308.01320#24
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
24
[Fragment of the per-benchmark results table: rows are COCO [7], Flickr-30K [40], VQAv2 [2], OK-VQA [26], TextVQA [33], VizWiz [11], and one further benchmark, each at 0, 4, and 32 shots; columns report Flamingo-3B (Fl-3B), Flamingo-9B (Fl-9B), OF-3B, OF-3B (I), OF-4B, and OF-4B (I); the numeric cells are not reliably recoverable from the extraction.]
2308.01390#24
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
25
To maximize the effective throughput, DeepSpeed-HE optimizes both phases. First, it uses the largest batch size possible to get higher efficiency in both phases. Second, during the generation phase, it leverages high-performance transformer kernels to maximize GPU memory bandwidth utilization when the model fits in a single GPU's memory, and leverages tensor parallelism (TP) when it does not. Using TP in the generation phase instead of ZeRO to fit the model reduces the inter-GPU communication and maintains high GPU memory bandwidth utilization. Figure 6 shows the best achievable effective throughput for DeepSpeed-HE in terms of TFlops/GPU for model sizes ranging from 1.3B to 175B. It also shows the throughput achieved
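The fits-on-one-GPU decision described above can be sketched as a back-of-the-envelope rule. Everything here is an illustrative assumption (fp16 weights only, ignoring activations and KV-cache, hypothetical function name), not DeepSpeed-HE's actual heuristic:

```python
def generation_parallelism(model_params_b, gpu_mem_gb, bytes_per_param=2):
    # fp16/bf16 weights: ~2 bytes per parameter; KV-cache and activations
    # are ignored in this rough estimate.
    model_gb = model_params_b * bytes_per_param
    if model_gb <= gpu_mem_gb:
        # Model fits: single-GPU transformer kernels keep the
        # memory-bandwidth-bound decode loop free of collectives.
        return 1
    # Otherwise shard weights with tensor parallelism across just enough
    # GPUs (rather than ZeRO) to avoid per-layer all-gathers in decode.
    return int(-(-model_gb // gpu_mem_gb))  # ceiling division

print(generation_parallelism(6.7, 40))   # OPT-6.7B on an A100-40G -> 1
print(generation_parallelism(66, 40))    # OPT-66B on A100-40G -> TP degree 4
```

The trade-off this encodes: TP replicates activations but keeps each decode step local, whereas ZeRO-style fetching of remote weight shards on every token would be dominated by inter-GPU communication.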
2308.01320#25
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
25
[Continuation of the per-benchmark results table: score cells with standard deviations in parentheses for the OF-3B, OF-3B (I), OF-4B, and OF-4B (I) columns; row/column alignment lost in extraction.]
2308.01390#25
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
26
[Figure 4 bar charts: end-to-end throughput (seqs/sec) for e2e-RLHF-OPT-125M, e2e-RLHF-OPT-1.3B, e2e-RLHF-OPT-6.7B, and e2e-RLHF-OPT-13B, comparing DS-Chat, CAI-Coati, and HF-DDP.] Figure 4: End-to-end training throughput comparison for step 3 of the training pipeline (the most time-consuming portion) with different model sizes on a single DGX node equipped with 8 NVIDIA A100-40G GPUs. Missing icons represent OOM scenarios.
2308.01320#26
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
26
[Continuation of the per-benchmark results table: further score cells with standard deviations in parentheses for the OpenFlamingo model columns; row/column alignment lost in extraction.]
2308.01390#26
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
27
[Figure 5 chart: Time/Seq breakdown for training OPT-1.3B in Step 3 on a DGX node with 8x A100-40G GPUs, comparing HF-DDP with DeepSpeed-Chat; x-axis: Time/Seq (sec); legend: Generation, RL Training, Others.] Figure 5: Superior generation phase acceleration from DeepSpeed Chat's Hybrid Engine: a time/sequence breakdown for training the OPT-1.3B actor model + OPT-350M reward model on a single DGX node with 8 A100-40G GPUs. [Figure 6 chart: RLHF Throughput Breakdown; y-axis: Throughput Per GPU (TFLOPs); x-axis: OPT-1.3B (8 GPUs), OPT-6.7B (8 GPUs), OPT-13B (8 GPUs), OPT-30B (32 GPUs), OPT-66B (16 GPUs), Bloom-175B (64 GPUs); legend: Generation Throughput, Training Throughput, Effective Throughput.] Figure 6: RLHF generation, training, and effective throughput with DeepSpeed-HE for different model sizes, at the GPU count that maximizes efficiency.
2308.01320#27
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
27
[Fragment of Table 4 continued (the OF-9B row and surrounding values, mean with standard deviation in parentheses); the column labels for these values fall outside this chunk.]
2308.01390#27
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
28
Figure 6: RLHF generation, training, and effective throughput with DeepSpeed-HE for different model sizes, at the GPU count that maximizes efficiency. The effective throughput is determined by the throughput achieved by each of the generation and training phases. DeepSpeed-HE is the most efficient for models in the range 6.7B-66B. Going beyond this range to 175B, the throughput drops due to the limited memory available to support larger batch sizes, while still achieving 1.2x better efficiency than the small 1.3B model. The per-GPU throughput of these gigantic models could improve further when we scale them to more GPUs, with more memory available for larger batch sizes. Furthermore, we would like to point out that our effective performance is 19x higher than that of existing systems, as shown in Figure 4, which suggests that they are operating at lower than 5% of the peak. This demonstrates both the challenge of optimizing RLHF workloads and the effectiveness of our system in spite of it. (II) Scalability Analysis The best effective throughput for different model sizes is achieved at different GPU counts. This is in part because some of the larger model sizes require more memory to run. However, a large part of this behavior stems from DeepSpeed-HE's scalability properties, which we discuss next.
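The effective-throughput framing above can be sketched numerically: when generation and training run at different per-GPU throughputs, the end-to-end figure is a FLOPs-weighted harmonic mean that is pulled toward the slower phase. This is a minimal sketch with illustrative numbers, not DeepSpeed-HE measurements; the function name and the `gen_flops_share` split are assumptions for the example.

```python
# Hedged sketch: combining per-phase throughputs into one effective number.
# All values are illustrative, not measured DeepSpeed-HE results.

def effective_throughput(gen_tflops, train_tflops, gen_flops_share):
    """Effective throughput as a FLOPs-weighted harmonic mean.

    gen_flops_share: fraction of total RLHF FLOPs spent in generation.
    Time per unit of work is share / phase_throughput, so the effective
    throughput is total work divided by total time.
    """
    train_flops_share = 1.0 - gen_flops_share
    total_time = gen_flops_share / gen_tflops + train_flops_share / train_tflops
    return 1.0 / total_time

# Generation is memory-bandwidth bound (slower per GPU), training compute bound.
eff = effective_throughput(gen_tflops=50.0, train_tflops=100.0, gen_flops_share=0.25)
print(round(eff, 1))  # 80.0 -- between the two phases, pulled toward the slower one
```

The harmonic weighting explains why accelerating the generation phase (as the Hybrid Engine does) lifts the effective number disproportionately.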
2308.01320#28
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01320
29
Figure 7 shows that DeepSpeed-RLHF has achieved good scaling overall on up to 64 GPUs. However, a closer look shows that DeepSpeed-RLHF training achieves super-linear scaling at small scale, followed by near-linear or sub-linear scaling at larger scales. This is due to the interaction between memory availability and the max global batch size. As DeepSpeed-HE is powered by ZeRO-based technology [15] for training, it allows model states to be partitioned across the available GPUs. As a result, the memory consumption per GPU reduces with the increase in the number of GPUs, allowing DeepSpeed-HE to support a larger batch per GPU and resulting in super-linear scaling. However, at large scale, while the available memory continues to increase, the maximum global batch size (1024, in our case, [Figure 7 chart: Scalability across nodes; left panel: actor model OPT-13b on A100-40G GPUs, right panel: actor model OPT-66b on A100-80G GPUs; throughput vs. number of nodes (8 GPUs per node), each with a linear-scale reference line.]
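The super-linear-then-capped behavior described above can be sketched with made-up constants: ZeRO partitions model states across GPUs, so the per-GPU memory freed for batches grows with GPU count, until the fixed global batch size caps the per-GPU batch. The GPU memory, model-state size, and per-sequence memory figures below are hypothetical, not DeepSpeed-HE internals.

```python
# Hypothetical sketch of ZeRO-driven scaling: all constants are illustrative.
GPU_MEM_GB = 40            # assumed per-GPU memory
MODEL_STATES_GB = 192      # rough params + grads + optimizer states, partitioned by ZeRO
MEM_PER_SEQ_GB = 0.5       # rough activation/KV memory per in-flight sequence
GLOBAL_BATCH = 1024        # fixed max global batch size

def per_gpu_batch(num_gpus):
    # ZeRO partitioning: model-state footprint shrinks as GPUs are added.
    free_gb = GPU_MEM_GB - MODEL_STATES_GB / num_gpus
    mem_limited = max(int(free_gb / MEM_PER_SEQ_GB), 0)
    batch_limited = GLOBAL_BATCH // num_gpus   # global batch cap per GPU
    return min(mem_limited, batch_limited)

def relative_throughput(num_gpus):
    # Proportional to total sequences in flight per step.
    return per_gpu_batch(num_gpus) * num_gpus

for n in (8, 16, 32, 64):
    print(n, per_gpu_batch(n), relative_throughput(n))
```

With these toy numbers, doubling from 8 to 16 GPUs more than doubles throughput (memory-limited regime, super-linear), while beyond 32 GPUs the 1024-sequence global batch dominates and throughput flattens, matching the qualitative behavior in the text.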
2308.01320#29
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
29
Table 4: Evaluation results across seven vision-language datasets using 0, 4, and 32 in-context examples. “OF-3B (I)” refers to OpenFlamingo-3B (Instruct), the 3B model trained with a language-instruction-tuned backbone, while “Fl-3B” refers to Flamingo-3B. Flamingo results taken from Alayrac et al. [3]. The highest number in each row is bolded. Full results (including 8- and 16-shot performance) are in Table 11. HatefulMemes and for large numbers of in-context examples on Flickr-30K and TextVQA. However, OpenFlamingo-4B models often underperform the smaller 3B models, including on Flickr-30K, HatefulMemes, TextVQA, and VizWiz.
2308.01390#29
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
30
Figure 7: Scalability for training the 13B (left) and 66B (right) actor model + 350M reward model on an increasing number of DGX nodes with 8 A100-40/80G GPUs. ...with a sequence length of 512) limits the batch size per GPU, resulting in near-linear or sub-linear scaling. As a result, for a given max global batch size, DeepSpeed-HE achieves the best throughput and cost efficiency at the boundary of super-linear and sub-linear scalability, and the exact point is mostly determined by the largest batch size that can be run per GPU as a function of the available memory and global batch size. # 6 Release: Try DeepSpeed Chat Today! We are very excited to share that DeepSpeed-Chat is now open-sourced and available to the AI community. • To get started, please visit our GitHub page for DeepSpeed-Chat: GitHub Landing Page (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat) • We will continue to improve DeepSpeed-Chat with your feedback and support. Our roadmap (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat/README.md#-deepspeed-chats-roadmap-) shows currently supported features as well as ones planned for the future.
2308.01320#30
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
30
Effect of language instruction-tuning. We train two OpenFlamingo models at each of the 3B and 4B scales: one model using a base language model, and one with an instruction-tuned variant of the same language model. In the lower right corner of Figure 6, we observe that the instruction-tuned variants of MPT-1B and RedPajama-3B on average outperform the base models. The difference is starkest for RedPajama-3B. Transfer of language instruction tuning to vision-language tasks was previously reported in Huang et al. [12], Li et al. [23]. Comparison to fine-tuned state-of-the-art. Figure 7 plots each model's performance relative to fine-tuned SoTA performance. [Figure 7 bars: Flamingo-9B vs. OpenFlamingo-9B, % of fine-tuned SoTA across the evaluation datasets COCO, Flickr30K, VQAv2, OKVQA, TextVQA, VizWiz, and HatefulMemes.] Figure 7: OpenFlamingo-9B and Flamingo-9B performance relative to fine-tuned SoTA performance.
2308.01390#30
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
31
DeepSpeed-Chat is part of the bigger DeepSpeed ecosystem, comprising a multitude of deep learning systems and modeling technologies. To learn more: • Please visit our website (https://www.deepspeed.ai/) for detailed blog posts, tutorials, and helpful documentation. • You can also follow us on our English Twitter (https://twitter.com/MSFTDeepSpeed), Japanese Twitter (https://twitter.com/MSFTDeepSpeedJP), and Chinese Zhihu (https://www.zhihu.com/people/deepspeed) for the latest news on DeepSpeed. DeepSpeed welcomes your contributions! We encourage you to report issues, contribute PRs, and join discussions on the DeepSpeed GitHub (https://github.com/microsoft/DeepSpeed/) page. Please see our contributing guide (https://github.com/microsoft/DeepSpeed/blob/master/CONTRIBUTING.md) for more details. We are open to collaborations with universities, research labs, and companies, such as those working together on deep learning research, applying DeepSpeed to empower real-world AI models and applications, and so on. # Contributions
2308.01320#31
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
31
Figure 7: OpenFlamingo-9B and Flamingo-9B performance relative to fine-tuned SoTA performance. Each model's performance is plotted relative to fine-tuned state-of-the-art performance, as listed on Papers With Code on June 19, 2023. OpenFlamingo-9B averages more than 62% of fine-tuned state-of-the-art performance with 32 RICES-selected in-context examples, compared to 72% achieved by Flamingo-9B. For more details on the fine-tuned SoTAs, see Appendix A.1. Table 5 fragment (COCO and VQAv2 validation scores, trainable vs. frozen <image>/<|endofchunk|> embeddings). COCO: 0-shot 46.5 (trainable) vs. 41.9 (frozen, −4.6); 4-shot 58.6 vs. 54.5 (−4.1); 8-shot 61.2 vs. 57.4 (−3.8). VQAv2: 0-shot 17.6 (trainable) vs. 5.5 (frozen, −12.1); 4-shot 23.2 vs. 8.4 (−14.8); 8-shot 28.7 vs. 18.8 (−9.9).
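RICES (retrieval-based in-context example selection), mentioned above, picks the demonstrations whose image features are most similar to the query. A minimal sketch, assuming similarity is plain cosine over precomputed embeddings; the function name `rices_select` and the toy vectors are illustrative (a real pipeline would use e.g. frozen CLIP image features), not the paper's implementation.

```python
# Hedged sketch of RICES-style demonstration selection with toy embeddings.
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rices_select(query_emb, pool, k):
    """pool: list of (example_id, embedding); return the k most similar ids."""
    ranked = sorted(pool, key=lambda item: cosine(query_emb, item[1]), reverse=True)
    return [ex_id for ex_id, _ in ranked[:k]]

# Toy pool of candidate in-context examples.
pool = [("dog", [1.0, 0.1]), ("cat", [0.9, 0.2]), ("car", [0.0, 1.0])]
print(rices_select([1.0, 0.0], pool, k=2))  # ['dog', 'cat']
```

The selected examples are then placed in the prompt as in-context demonstrations, which is how the "32 RICES-selected in-context examples" setting above is constructed.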
2308.01390#31
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
32
# Contributions ZY: Full engagement. RYA: Hybrid engine. OR: DeepSpeed ZeRO feature adaptation. SR: System support and blog contribution. XW: Training pipeline and benchmarking support. AAA: Software support and post-release debugging. JR: Software support and post-release debugging. MZ: Training pipeline and system support, post-release debugging. CL: Data support and post-release debugging. CH: System support. ZZ: Benchmarking. MW: Software support and post-release debugging. MS: Post-release debugging. LK: Post-release debugging. HQ: System support. MT: System support. SC: Software support. SLS: System support, blog and tutorial contribution. YH: Team lead. # Acknowledgment We thank the entire DeepSpeed team for their contributions to developing, debugging, testing, and releasing the DeepSpeed-Chat software. # References [1] OpenAI. ChatGPT. https://openai.com/blog/chatgpt, 2022. [2] ChatLLaMa Authors. ChatLLaMa. https://github.com/juncongmoo/chatllama, 2023.
2308.01320#32
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
32
Table 5: COCO and VQAv2 validation performance when using trainable <image> and <|endofchunk|> embeddings compared to frozen, randomly initialized embeddings. The model used in this experiment is based on CLIP ViT-L/14 and OPT 125M, with cross-attention every layer, and trained on 20M interleaved samples, including ChatGPT-generated sequences. # 5 Discussion # 5.1 Frozen embeddings In §4, we observed that OpenFlamingo-4B models underperform their 3B counterparts on most datasets. One notable way the OpenFlamingo-4B models differ from the 3B and 9B models is that their <image> and <|endofchunk|> embeddings are randomly initialized and frozen, rather than trained.
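The trainable-vs-frozen distinction above comes down to whether the optimizer updates the embedding rows of the newly added special tokens. A minimal pure-Python sketch of the frozen variant, where a gradient step skips those rows; the embedding values, gradients, and learning rate are made up for illustration, not the paper's training setup.

```python
# Illustrative sketch: frozen special-token embedding rows skip gradient updates.
# All values below are hypothetical.
EMBED = {"<image>": [0.1, 0.1], "<|endofchunk|>": [0.2, 0.2], "the": [0.5, 0.5]}
FROZEN = {"<image>", "<|endofchunk|>"}  # rows kept at their random initialization

def sgd_step(grads, lr=0.1):
    for token, g in grads.items():
        if token in FROZEN:
            continue  # frozen rows: leave the embedding untouched
        EMBED[token] = [w - lr * gi for w, gi in zip(EMBED[token], g)]

sgd_step({"<image>": [1.0, 1.0], "the": [1.0, 1.0]})
print(EMBED["<image>"], EMBED["the"])  # [0.1, 0.1] [0.4, 0.4]
```

In a framework like PyTorch the same effect is typically achieved by masking gradients or excluding those parameters from the optimizer; the table suggests this choice can cost substantial downstream accuracy.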
2308.01390#32
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
33
[2] ChatLLaMa Authors. ChatLLaMa. https://github.com/juncongmoo/chatllama, 2023. [3] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023. [4] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
2308.01320#33
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
33
In Table 5, we investigate the effect of this difference. We train small models using OPT-125M as a language model [42] to 20M interleaved samples (one-third of full training). Freezing the <image> and <|endofchunk|> embeddings results in a drop of 4.6 CIDEr for 0-shot COCO, and 12.1% accuracy for 0-shot VQAv2. This suggests that frozen <image> and <|endofchunk|> embeddings may impact downstream trends. # 5.2 VQAv2 validation trends During development, we used the VQAv2 validation set as a temperature check for visual question answering capabilities. In this section, we discuss trends observed during development. Training dynamics. To understand how evaluation performance evolves over the course of training, Figure 8 plots validation performance of OpenFlamingo-9B on COCO and VQAv2 throughout training. While COCO performance steadily improves, VQAv2 progress is flatter.
2308.01390#33
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
34
[5] Databricks. Databricks-dolly. https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html, 2023.
2308.01320#34
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
34
Figure 8: COCO CIDEr and VQAv2 accuracy (0-shot and 4-shot) for OpenFlamingo-9B across training steps: while COCO CIDEr improves throughout training, VQAv2 performance is more stagnant. This matches trends reported by Li et al. [23]. Effect of language model. Although additional training did not dramatically affect VQAv2 performance, changing language model backbones did. Table 7 illustrates this effect on the VQAv2 validation split; notably, switching from OPT-1.3B to MPT-1B (Instruct) added nearly 10 percentage points in 0-shot performance. We hypothesize that the language model has similarly large effects for other VQA tasks.
2308.01390#34
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
35
[6] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019. [7] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. [8] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020.
2308.01320#35
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
35
Common VQA failure modes (Table 6). OpenFlamingo models struggle with counting; on the VQAv2 validation split, OpenFlamingo-9B scores 30.5% on questions with numerical answers, compared to 70.6% on yes / no questions. Additionally, because VQA accuracy uses an exact match criterion for generations, models must answer concisely to score well; OpenFlamingo models are often too verbose. Finally, VQA questions can ask about objects other than the central object in the image; models sometimes answer about the central item instead. # 5.3 Applications of OpenFlamingo Multiple models have already been developed on top of OpenFlamingo. Li et al. [20] fine-tuned OpenFlamingo on MIMIC-IT [19], a multi-image/video instruction following dataset, creating Otter, a
2308.01390#35
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01320
36
[9] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. [10] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022. [11] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. [12] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018. [13] Colossal AI Authors. Colossal ai. https://github.com/hpcaitech/ColossalAI, 2022.
2308.01320#36
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01320
37
[14] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019. [15] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–16. IEEE, 2020.
2308.01320#37
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human performance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF training, making it accessible to the AI community. DeepSpeed-Chat offers three key capabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, enabling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
http://arxiv.org/pdf/2308.01320
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
cs.LG, cs.AI, cs.CL
14 pages, 7 figures
null
cs.LG
20230802
20230802
[ { "id": "1707.06347" }, { "id": "2106.09685" }, { "id": "1806.03822" }, { "id": "1910.03771" }, { "id": "2205.01068" } ]
2308.01390
37
Table 6: OpenFlamingo-9B errors from the VQAv2 validation split. Common failure modes for OpenFlamingo include counting, giving answers that are too verbose (and thus truncated), and answering about the central object in the image rather than the non-central object in the question.

Table 7: VQAv2 validation performance at 20M interleaved samples across different language models. Performance largely differs between language models.

Language model      0-shot  4-shot
OPT-125M            17.6    23.2
OPT-1.3B            32.8    27.2
MPT-1B (Instruct)   41.9    43.7
MPT-7B              47.4    49.4

multimodal assistant. Gong et al. [10] released Multimodal-GPT, an OpenFlamingo model instruction fine-tuned on both vision and language instruction datasets. We hope the community continues to use OpenFlamingo models. # 6 Conclusion In this technical report, we described OpenFlamingo, a family of five autoregressive vision-language models across the 3B, 4B, and 9B scales. OpenFlamingo remains an active research project, and we continue to work on training and releasing high-quality autoregressive vision-language models. We hope our contribution enables more researchers to train and study such models. # Acknowledgements
2308.01390#37
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
38
We would like to thank Jean-Baptiste Alayrac and Antoine Miech for their advice on reproducing Flamingo. We also thank Rohan Taori, Nicholas Schiefer, Deep Ganguli, Thomas Liao, Tatsunori Hashimoto, and Nicholas Carlini for their help with assessing the safety risks of our first release of OpenFlamingo. Thanks to Stability AI for compute resources. # 5.4 Limitations OpenFlamingo models carry the same risks as their foundational language models. In particular, these models train on web-scraped data, and they have not undergone safety-focused fine-tuning. Models thus may produce unexpected, inappropriate, or inaccurate outputs. We hope to further investigate the safety properties of autoregressive vision-language models like OpenFlamingo. # References [1] Armen Aghajanyan, Po-Yao (Bernie) Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. Cm3: A causal masked multimodal model of the internet. arXiv preprint arXiv:2201.07520, 2022.
2308.01390#38
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
39
[2] Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Devi Parikh, and Dhruv Batra. Vqa: Visual question answering. International Journal of Computer Vision, 123:4–31, 2015. [3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022. [4] Clip retrieval: Easily compute clip embeddings and build a clip retrieval system with them. https://github.com/rom1504/clip-retrieval, 2022. [5] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
2308.01390#39
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
40
[6] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022. [7] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
2308.01390#40
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
41
[8] Danny Driess, F. Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Ho Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Peter R. Florence. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. [9] Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, et al. Datacomp: In search of the next generation of multimodal datasets. arXiv preprint arXiv:2304.14108, 2023.
2308.01390#41
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
2308.01390
42
[10] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qianmengke Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023. [11] Danna Gurari, Qing Li, Abigale Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P. Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3608–3617, 2018. [12] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023.
2308.01390#42
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
We introduce OpenFlamingo, a family of autoregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce an open-source replication of DeepMind's Flamingo models. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of corresponding Flamingo performance. This technical report describes our models, training data, hyperparameters, and evaluation suite. We share our models and code at https://github.com/mlfoundations/open_flamingo.
http://arxiv.org/pdf/2308.01390
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
cs.CV, cs.AI, cs.LG
null
null
cs.CV
20230802
20230807
[ { "id": "1909.11059" }, { "id": "2306.05425" }, { "id": "2108.07258" }, { "id": "2205.12005" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2210.08402" }, { "id": "2306.16527" }, { "id": "1504.00325" }, { "id": "2303.03378" }, { "id": "2209.06794" }, { "id": "2304.11277" }, { "id": "2305.04790" }, { "id": "2303.08774" }, { "id": "2304.08485" }, { "id": "2302.14045" }, { "id": "2005.04790" }, { "id": "2301.12597" }, { "id": "2201.07520" }, { "id": "2301.13823" }, { "id": "2305.03726" }, { "id": "2303.16199" }, { "id": "2304.14108" }, { "id": "2304.06939" }, { "id": "2207.07635" }, { "id": "2109.01652" }, { "id": "2210.05916" } ]
[13] Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. Perceiver: General perception with iterative attention. In International Conference on Machine Learning, pages 4651–4664. PMLR, 2021.
[14] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR, 2021.
[15] Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. arXiv preprint arXiv:2005.04790, 2020.
[16] Jing Yu Koh, Ruslan Salakhutdinov, and Daniel Fried. Grounding language models to images for multimodal generation. arXiv preprint arXiv:2301.13823, 2023.
[17] Gokul Karthik Kumar and Karthik Nandakumar. Hate-CLIPper: Multimodal hateful meme classification based on cross-modal interaction of CLIP features. arXiv preprint arXiv:2210.05916, 2022.
[18] Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, and Victor Sanh. OBELISC: An open web-scale filtered dataset of interleaved image-text documents. arXiv preprint arXiv:2306.16527, 2023.
[19] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, C. Li, and Ziwei Liu. MIMIC-IT: Multi-modal in-context instruction tuning. arXiv preprint arXiv:2306.05425, 2023.
[20] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023.
[21] Chenliang Li, Haiyang Xu, Junfeng Tian, Wei Wang, Ming Yan, Bin Bi, Jiabo Ye, Hehong Chen, Guohai Xu, Zheng da Cao, Ji Zhang, Songfang Huang, Feiran Huang, Jingren Zhou, and Luo Si. mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections. arXiv preprint arXiv:2205.12005, 2022.
[22] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR, 2022.
[23] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
[24] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part V, pages 740–755. Springer, 2014.
[25] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[26] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. OK-VQA: A visual question answering benchmark requiring external knowledge. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3190–3199, 2019.
[27] MosaicML. Introducing MPT-7B: A new standard for open-source, commercially usable LLMs, 2023.
[28] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[29] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k Entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In IEEE International Conference on Computer Vision, pages 2641–2649, 2015.
[30] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[31] Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang, and Tatsunori Hashimoto. Is a caption worth a thousand images? A controlled study for representation learning. arXiv preprint arXiv:2207.07635, 2022.
[32] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. LAION-5B: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022.
[33] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards VQA models that can read. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8309–8318, 2019.
[34] WebDataset: A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. Available at: https://github.com/webdataset/webdataset, 2020.
[35] Together.xyz. Releasing 3B and 7B RedPajama-INCITE family of models including base, instruction-tuned & chat models. https://www.together.xyz/blog/redpajama-models-v1, 2023.
[36] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. CIDEr: Consensus-based image description evaluation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4566–4575, 2014.
[37] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
[38] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of GPT-3 for few-shot knowledge-based VQA. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3081–3089, 2022.
[39] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mPLUG-Owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.
[40] Peter Young, Alice Lai, Micah Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78, 2014.
[41] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. arXiv preprint arXiv:2303.16199, 2023.
[42] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
[43] Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid Shojanazeri, Myle Ott, Sam Shleifer, et al. PyTorch FSDP: Experiences on scaling fully sharded data parallel. arXiv preprint arXiv:2304.11277, 2023.
[44] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. Unified vision-language pre-training for image captioning and VQA. arXiv preprint arXiv:1909.11059, 2019.
[45] Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal C4: An open, billion-scale corpus of images interleaved with text. arXiv preprint arXiv:2304.06939, 2023.

Table 8: Fine-tuned state-of-the-art numbers used in this report.

| Method | Dataset | Score |
| --- | --- | --- |
| mPLUG [21] | COCO | 155.1 |
| Unified VLP [44] | Flickr-30K | 67.4 |
| Pali-17B [6] | VQAv2 | 84.3 |
| Pali-17B [6] | OK-VQA | 64.5 |
| Pali-17B [6] | TextVQA | 73.1 |
| Pali-17B [6] | VizWiz | 73.3 |
| Hate-CLIPper [17] | HatefulMemes | 85.8 |

# A Extended results

Table 11 provides full evaluation results for 0, 4, 8, 16, and 32 in-context examples. For ease of comparison to Flamingo, we calculate each OpenFlamingo model's performance as a fraction of corresponding Flamingo performance in Figure 11.

# A.1 Comparison to fine-tuned SoTAs
In Figure 9, we compare OpenFlamingo models to fine-tuned SoTA performances for different numbers of in-context examples. The fine-tuned methods used were pulled from PapersWithCode on 06/19/23 (Table 8).

Figure 9: We plot each model's performance relative to fine-tuned state-of-the-art performance, averaged across datasets. (The plot shows aggregated % of fine-tuned SoTA, roughly 50–70%, against 0–32 in-context examples, with curves for OF-3B, OF-3B (I), OF-4B, OF-4B (I), and OF-9B, and dashed reference lines for Flamingo-3B and Flamingo-9B.)
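The aggregation behind this comparison can be sketched as follows; the helper name and example scores are illustrative assumptions, not the paper's evaluation code. Each model score is divided by the fine-tuned SoTA score for the same dataset, and the resulting fractions are averaged:

```python
def aggregate_pct_of_sota(model_scores: dict, sota_scores: dict) -> float:
    """Average a model's per-dataset scores as percentages of the
    fine-tuned SoTA scores (illustrative sketch, not the paper's code)."""
    fractions = [model_scores[d] / sota_scores[d] for d in model_scores]
    return 100.0 * sum(fractions) / len(fractions)

# Hypothetical model scores against two of Table 8's SoTA numbers.
sota = {"COCO": 155.1, "VQAv2": 84.3}
model = {"COCO": 93.1, "VQAv2": 55.1}
print(round(aggregate_pct_of_sota(model, sota), 1))
```

Averaging fractions (rather than raw scores) keeps datasets with very different score scales, such as CIDEr on COCO versus VQA accuracy, from dominating the aggregate.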
Table 9: Using RICES [38] to select in-context examples often outperforms using random demonstrations. Scores in table are for OpenFlamingo-9B.

| Benchmark | Shots | Random | RICES |
| --- | --- | --- | --- |
| COCO | 4 | 89.0 | 93.1 (+4.1) |
| COCO | 32 | 99.5 | 109.0 (+9.5) |
| Flickr-30K | 0 | 59.5 | 39.2 (−20.3) |
| Flickr-30K | 4 | 65.8 | 52.2 (−13.6) |
| Flickr-30K | 8 | 62.9 | 58.7 (−4.2) |
| Flickr-30K | 32 | 61.3 | 63.0 (+1.7) |
| VQAv2 | 4 | 54.8 | 55.1 (+0.3) |
| VQAv2 | 32 | 53.3 | 56.8 (+3.5) |
| OK-VQA | 4 | 40.1 | 38.3 (−1.8) |
| OK-VQA | 32 | 42.4 | 46.3 (+3.9) |
| TextVQA | 4 | 28.2 | 34.2 (+6.0) |
| TextVQA | 32 | 23.8 | 31.1 (+7.3) |
| VizWiz | 4 | 27.5 | 41.0 (+13.5) |
| VizWiz | 32 | 44.0 | 46.4 (+2.4) |
| HatefulMemes | 4 | 54.0 | 70.1 (+16.1) |
| HatefulMemes | 32 | 53.8 | 73.6 (+19.8) |
# A.2 Evaluations using RICES

In the main text, we evaluate OpenFlamingo by selecting in-context examples uniformly at random. In this appendix, we include additional evaluation results using Retrieval-based In-Context Example Selection (RICES) [38]. For a given test example, RICES selects the top-k most similar training examples as demonstrations, where similarity is measured by cosine similarity of the images according to the frozen vision encoder (CLIP ViT-L/14). Full results with RICES are listed in Table 12 and illustrated in Figure 10.
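The selection step described above might be sketched as follows; `rices_select` and the embedding arrays are illustrative assumptions, not the released OpenFlamingo evaluation code.

```python
import numpy as np

def rices_select(test_embedding, train_embeddings, k):
    """Return indices of the k training examples whose image embeddings
    (e.g. from a frozen vision encoder) are most cosine-similar to the
    test image embedding. Illustrative sketch, not the released code."""
    # Normalize so the dot product equals cosine similarity.
    test = test_embedding / np.linalg.norm(test_embedding)
    train = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    similarities = train @ test            # one cosine similarity per training example
    return np.argsort(-similarities)[:k]   # indices of the top-k most similar
```

The returned indices would then be used to assemble the k-shot prompt, with the most similar training examples serving as in-context demonstrations.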
In Table 9, we compare OpenFlamingo-9B performance using RICES to performance using randomly selected in-context examples. We observe that RICES significantly boosts performance in most evaluation settings, including by 19.2 ROC AUC using 32 shots on HatefulMemes. However, on Flickr-30K, we observe significant degradations from using RICES: CIDEr degrades by 20.4 in 0-shot evaluations² and 13.1 in 4-shot evaluations. We hypothesize that the demonstrations RICES selects in Flickr-30K are more similar to the test example than in other datasets. This leads OpenFlamingo-9B to parrot captions from the in-context examples, including incorrect details. For an example, see Table 10 in Appendix A.

²In 0-shot evaluations, RICES is still used to select the two text-only examples used for the prompt (§3.4).
2308.01390#56
2308.01390
57
[Figure 10: Evaluation results per dataset (COCO, Flickr30K, HatefulMemes, OK-VQA, TextVQA, VQAv2, and the average) across 0, 4, 8, 16, and 32 in-context examples, for Flamingo-3B, Flamingo-9B, OF-3B, OF-3B (I), OF-4B, OF-4B (I), and OF-9B. Results are reported in tabular form in Table 12.]

# B Additional notes on filtering MMC4
2308.01390#57
2308.01390
58
When training contrastive vision-language models, filtering image-text pairs by CLIP cosine similarity has proven particularly helpful for improving data quality [31, 9]. We use a similar notion for filtering interleaved sequences in MMC4: if an image and its matched sentence have a cosine similarity that falls below a fixed threshold (0.24), according to CLIP ViT-L/14 embeddings, we omit the image from the sequence, keeping the text. If all images in a sequence are omitted, we discard the sequence entirely. This aims to ensure that each image is relevant to the text that follows it.

However, increasing the image-text similarity threshold has a side effect: it reduces the typical number of images per interleaved sequence. When using similarity 0.32, nearly 58% of a sample of 1,000 MMC4 sequences contain only 1 image per sequence, compared to 38% in Figure 4, which uses a threshold of 0.24. Training with long sequences may be important for producing models that can handle a large number of in-context examples. Further, we estimate that 88.7% of MMC4 sequences are discarded completely when filtering with threshold 0.32, compared to 42.7% with threshold 0.24.
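A minimal sketch of this per-sequence filter, assuming precomputed embedding arrays stand in for CLIP ViT-L/14 features; `filter_sequence` is an illustrative name, not code from the MMC4 or OpenFlamingo repositories:

```python
import numpy as np

def filter_sequence(image_embs, text_embs, threshold=0.24):
    """Filter one interleaved sequence.

    image_embs[i] is the embedding of the i-th image and text_embs[i] the
    embedding of its matched sentence. An image whose cosine similarity to
    its sentence falls below `threshold` is dropped (the text is kept); if
    every image is dropped, the whole sequence is discarded.
    Returns the indices of kept images, or None to discard the sequence.
    """
    ims = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txts = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = np.sum(ims * txts, axis=1)  # per-pair cosine similarity
    kept = [i for i, s in enumerate(sims) if s >= threshold]
    return kept if kept else None
```

Raising `threshold` makes the filter stricter, which is exactly the trade-off discussed above: fewer, shorter sequences of higher image-text relevance.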
2308.01390#58
2308.01390
59
As future work, we are interested in understanding how to balance length, quality, and dataset size objectives to improve OpenFlamingo models.

# C Synthetic data prompt

We provide the prompt used to generate the ChatGPT-generated data (see §3.2) in Table 12. After generating candidate sequences, we query LAION-5B using [4] to infill images. For each unique caption we generate, we attempt to retrieve 10 candidate images from the index using index=laion5B-L-14, aesthetic score=9, and aesthetic weight=0.5. After this search, we re-rank the retrieved images using CLIP ViT-L/16@336px and select the image with the highest similarity to interleave.

# D Image credits

We include the links to the images we used in Figure 2 in Table 13.
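The retrieve-then-re-rank step in Appendix C can be sketched as follows. The retrieval call against the LAION-5B index is omitted; embedding arrays stand in for CLIP ViT-L/16@336px features, and `pick_best_candidate` is a hypothetical name, not from the actual pipeline:

```python
import numpy as np

def pick_best_candidate(caption_emb, candidate_embs):
    """Re-rank retrieved candidate images for one generated caption.

    Returns the index of the candidate with the highest cosine similarity
    to the caption; that image is the one interleaved into the sequence.
    """
    c = caption_emb / np.linalg.norm(caption_emb)
    cands = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    return int(np.argmax(cands @ c))  # highest cosine similarity wins
```

In the described pipeline, `candidate_embs` would hold the up-to-10 images retrieved per caption, and the winner is interleaved into the synthetic sequence.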
2308.01390#59
2308.01390
60
Table 10 content: random demonstrations vs. RICES.

Demos (random demonstrations):
- A person hanging from a telephone pole near the mountains.
- A trio of male musicians are performing with one playing a guitar and singing into a microphone, another holding a harmonica, and the third playing a bass guitar.
- Two men, both in strange hats, working over rocks in a busy urban street.
- Several people are in a group where a man in a blue shirt is smiling.

Demos (RICES):
- The brown dog is running through the grass with a yellow toy in its mouth.
- A white dog rushes down a dirt path surrounded by grass and trees.
- The tan dog is carrying a green squeak toy in its mouth.
- A yellow dog running through a yard covered in leaves while holding a yellow toy in his mouth.

Test example, OF-9B generations:
- With random demonstrations: A yellow labrador retriever running with a ball.
- With RICES: A yellow dog running through a yard covered in leaves while holding a green squeak toy in his mouth

Ground truth: A white dog fetching a yellow toy.
2308.01390#60
2308.01390
62
Table 10: Comparison of OpenFlamingo-9B outputs for a Flickr-30K 4-shot evaluation using RICES vs. random demonstrations. With RICES, OpenFlamingo-9B patches together these demonstration captions to answer for the test image, including incorrect details.

[Figure 11: OpenFlamingo performance as a fraction of corresponding Flamingo performance for each evaluation setting. We compare OpenFlamingo-3B and -4B models to Flamingo-3B, and OpenFlamingo-9B to Flamingo-9B. 0, 4, 8, 16, and 32 refer to the number of in-context examples used.]
2308.01390#62
2308.01390
63
Benchmark        Shots   Fl-3B   Fl-9B
COCO [7]             0    73.0    79.4
                     4    85.0    93.1
                     8    90.6    99.0
                    16    95.4   102.2
                    32    99.0   106.3
Flickr-30K [40]      0    60.6    61.5
                     4    72.0    72.6
                     8    71.7    73.4
                    16    73.4    72.7
                    32    71.2    72.8
VQAv2 [2]            0    49.2    51.8
                     4    53.2    56.3
                     8    55.4    58.0
                    16    56.7    59.4
                    32    57.1    60.4
OK-VQA [26]          0    41.2    44.7
                     4    43.3    49.3
                     8    44.6    50.0
                    16    45.6    50.8
                    32    45.9    51.0
TextVQA [33]         0    30.1    31.8
                     4    32.7    33.6
                     8    32.4    33.6
                    16    31.8    33.5
                    32    30.6    32.6
VizWiz [11]          0    28.9    28.8
                     4    34.0    34.9
                     8    38.4    39.4
                    16    43.3    43.0
                    32    45.5    44.0
HatefulMemes         0    53.7    57.0
                     4    53.6    62.7
                     8    54.7    63.9
                    16    55.3    64.5
                    32    56.3    63.5
2308.01390#63
2308.01390
64
OF-3B OF-3B (I) OF-4B OF-4B (I) 81.2 (0.3) 74.9 (0.2) 85.8 (0.5) 77.3 (0.3) 94.8 (0.2) 85.9 (0.6) 98.0 (0.3) 89.8 (0.2) 99.2 (0.3) 93.0 (0.6) 55.6 (1.3) 52.3 (1.0) 61.2 (0.5) 57.2 (0.4) 59.0 (1.0) 58.6 (1.1) 54.8 (1.0) 59.2 (0.5) 53.0 (0.5) 61.1 (1.3) 46.9 (0.0) 44.6 (0.0) 49.0 (0.0) 45.8 (0.0) 47.4 (0.0) 46.2 (0.0) 45.1 (0.1) 46.6 (0.0) 47.3 (0.0) 47.0
2308.01390#64
2308.01390
65
47.0 (0.1) 31.7 (0.1) 28.2 (0.2) 34.6 (0.0) 30.3 (0.5) 33.7 (0.2) 31.1 (0.3) 31.3 (0.1) 30.9 (0.3) 34.7 (0.3) 31.0 (0.1) 21.1 (0.4) 24.2 (0.2) 27.2 (0.3) 27.0 (0.3) 25.1 (0.2) 27.7 (0.1) 23.2 (0.1) 28.0 (0.2) 23.2 (0.2) 28.3 (0.2) 21.5 (0.2) 23.7 (0.5) 26.5 (0.4) 27.0 (0.3) 29.1 (0.2) 32.1 (0.7) 31.0 (0.6) 36.1 (0.3) 31.3 (0.2) 39.8 (0.1) 53.1 (2.2) 51.2
2308.01390#65
2308.01390
66
51.2 (2.5) 54.9 (1.1) 50.6 (0.8) 58.5 (0.3) 52.0 (1.1) 56.9 (1.5) 48.5 (0.7) 54.9 (1.1) 50.2 (1.8) 74.4 (0.6) 82.7 (0.7) 87.8 (0.5) 91.9 (0.3) 94.8 (0.3) 51.2 (0.2) 59.1 (0.3) 60.7 (0.6) 63.0 (0.4) 64.5 (1.3) 44.1 (0.1) 45.7 (0.1) 45.9 (0.1) 45.8 (0.0) 44.8 (0.1) 28.7 (0.1) 30.6 (0.2) 31.5 (0.3) 30.7 (0.3) 30.6 (0.1) 23.1 (0.2) 28.1 (0.4) 29.1 (0.1) 29.1
2308.01390#66
2308.01390
67
29.1 (0.1) 28.5 (0.1) 23.4 (0.3) 27.7 (0.1) 32.1 (0.6) 35.3 (0.1) 39.3 (0.4) 50.1 (2.2) 49.5 (0.6) 50.7 (1.8) 48.7 (1.0) 47.8 (2.2) 76.7 (0.2) 81.8 (0.4) 90.7 (0.3) 93.9 (0.4) 95.1 (0.3) 53.6 (0.9) 60.7 (1.2) 55.9 (1.3) 56.8 (0.5) 56.9 (0.7) 45.1 (0.1) 49.0 (0.0) 48.3 (0.0) 45.5 (0.1) 43.0 (0.2) 30.7 (0.1) 35.1 (0.0) 33.9 (0.1) 28.5 (0.2) 26.4 (0.2) 21.0
2308.01390#67
2308.01390
68
21.0 (0.3) 25.9 (0.0) 21.3 (0.2) 18.2 (0.4) 14.1 (0.2) 18.8 (0.1) 26.6 (0.5) 28.8 (0.4) 24.6 (0.2) 23.1 (1.1) 52.3 (2.3) 51.5 (1.4) 55.2 (0.8) 54.5 (1.3) 52.2 (1.2) OF-9B 79.5 (0.2) 89.0 (0.3) 96.3 (0.1) 98.8 (0.7) 99.5 (0.1) 59.5 (1.0) 65.8 (0.6) 62.9 (1.0) 62.8 (1.0) 61.3 (0.7) 52.7 (0.2) 54.8 (0.0) 54.8 (0.0) 54.3 (0.0) 53.3 (0.1) 37.8 (0.2) 40.1 (0.1)
2308.01390#68
2308.01390
71
Benchmark        Shots   Fl-3B   Fl-9B   OF-3B   OF-3B (I)   OF-4B   OF-4B (I)   OF-9B
COCO [7]             0    73.0    79.4
                     4    85.0    93.1
                     8    90.6    99.0
                    16    95.4   102.2
                    32    99.0   106.3
Flickr-30K [40]      0    60.6    61.5
                     4    72.0    72.6
                     8    71.7    73.4
                    16    73.4    72.7
                    32    71.2    72.8
VQAv2 [2]            0    49.2    51.8
                     4    53.2    56.3
                     8    55.4    58.0
                    16    56.7    59.4
                    32    57.1    60.4
OK-VQA [26]          0    41.2    44.7
                     4    43.3    49.3
                     8    44.6    50.0
                    16    45.6    50.8
                    32    45.9    51.0
TextVQA [33]         0    30.1    31.8
                     4    32.7    33.6
                     8    32.4    33.6
                    16    31.8    33.5
                    32    30.6    32.6
VizWiz [11]          0    28.9    28.8
                     4    34.0
                     8    38.4
                    16    43.3
                    32    45.5
HatefulMemes         0    53.7
                     4    53.6
                     8    54.7
                    16    55.3
                    32    56.3
2308.01390#71