<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# Distributed training with 🤗 Accelerate[[distributed-training-with-accelerate]]

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and for accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.
## Setup[[setup]]

Get started by installing 🤗 Accelerate:
```bash
pip install accelerate
```
Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```
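If part of your code still needs an explicit device reference (for example, for tensors you create yourself rather than load through a DataLoader), the `Accelerator` exposes the device it selected. A minimal sketch; the tensor below is purely illustrative:

```py
>>> import torch

>>> device = accelerator.device  # device chosen by 🤗 Accelerate for the current process
>>> dummy_input = torch.randn(2, 8).to(device)  # hypothetical tensor, shown only to illustrate placement
```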
## Prepare to accelerate[[prepare-to-accelerate]]

The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer:
```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```
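The snippet above assumes the DataLoaders, model, and optimizer have already been created. A minimal sketch of how they might look; the checkpoint name, batch size, and the `tokenized_datasets` object are placeholders, not something this guide defines:

```py
>>> from torch.utils.data import DataLoader
>>> from torch.optim import AdamW
>>> from transformers import AutoModelForSequenceClassification

>>> # assumes `tokenized_datasets` is a tokenized 🤗 Datasets object with "train" and "test" splits
>>> train_dataloader = DataLoader(tokenized_datasets["train"], shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(tokenized_datasets["test"], batch_size=8)

>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
>>> optimizer = AdamW(model.parameters(), lr=3e-5)
```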
## Backward[[backward]]

The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:
```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)
...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```
As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```
## Train[[train]]

Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.
### Train with a script[[train-with-a-script]]

If you are running your training from a script, run the following command to create and save a configuration file:
```bash
accelerate config
```
Then launch your training with:
```bash
accelerate launch train.py
```
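`accelerate launch` also accepts flags such as `--multi_gpu` and `--num_processes` if you want to describe the setup directly on the command line; the script name and process count below are placeholders:

```bash
accelerate launch --multi_gpu --num_processes 2 train.py
```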
### Train with a notebook[[train-with-a-notebook]]

🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:
```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```
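A minimal sketch of what `training_function` might contain, reusing the placeholder `tokenized_datasets` and hyperparameters from the sketches above; the key point is that the `Accelerator` is created inside the function so each launched process gets its own instance, and the process count passed to `notebook_launcher` is illustrative:

```py
>>> from accelerate import Accelerator, notebook_launcher
>>> from torch.optim import AdamW
>>> from torch.utils.data import DataLoader
>>> from transformers import AutoModelForSequenceClassification

>>> def training_function():
...     # each launched process creates its own Accelerator inside the function
...     accelerator = Accelerator()
...     model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
...     optimizer = AdamW(model.parameters(), lr=3e-5)
...     train_dataloader = DataLoader(tokenized_datasets["train"], shuffle=True, batch_size=8)
...     train_dataloader, model, optimizer = accelerator.prepare(train_dataloader, model, optimizer)
...     model.train()
...     for epoch in range(3):
...         for batch in train_dataloader:
...             loss = model(**batch).loss
...             accelerator.backward(loss)
...             optimizer.step()
...             optimizer.zero_grad()

>>> notebook_launcher(training_function, num_processes=8)  # e.g. 8 TPU cores; adjust for your hardware
```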
For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate).