Our aim is to train a large model in a decentralized fashion on consumer hardware or low-end cloud instances. This means we need to make the model, dataset, and other memory buffers fit onto a few GB of disk, 12-16 GB of CPU RAM, and 8-12 GB of GPU memory. Unfortunately, this rules out many popular techniques such as ZeRO-Offload: there is simply not enough RAM for that. Instead, we must make better use of what limited memory we have. To do this, we use two techniques: 8-bit Optimizers for GPU memory and dataset streaming for RAM & HDD.

8-bit Optimizers: Using optimizers such as LAMB or Adam requires four times as much GPU memory as simply storing the model parameters (8 bytes vs 2 bytes). As such, when training large models with many parameters, the optimizer states make up the largest chunk of memory. With 8-bit optimizers, this memory is reduced by 75% (down to 2 bytes per parameter), making it much easier to fit large models onto consumer GPUs.
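For example, with the bitsandbytes library, switching to an 8-bit optimizer is a one-line change. The sketch below is only illustrative: the toy model and hyperparameters are placeholders, not the ones used in our experiments.

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()  # stand-in for your real model

# Drop-in replacement for torch.optim.Adam that stores its statistics in 8 bits
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3, betas=(0.9, 0.995))

loss = model(torch.randn(16, 1024, device="cuda")).pow(2).mean()  # dummy loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```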

Naturally, we can combine this technique with offloading: storing the 8-bit optimizer states in CPU memory rather than GPU memory (0 bytes on the GPU, 2 bytes in CPU RAM per parameter). To perform an optimizer update, we transfer the GPU gradients to the CPU, perform the update there, and then transfer the updated weights back to the GPU. We can do this for each weight one by one, so the additional CPU memory required for the update is minimal. The combination of offloading and 8-bit optimizers means that we conserve GPU memory (0 bytes per parameter) while using only a limited amount of CPU memory (2 bytes per parameter).
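The sketch below illustrates this offloaded update in plain PyTorch. For clarity, it keeps fp32 Adam states on the CPU; an actual setup would use the 8-bit statistics described above, and hivemind.Optimizer can handle the offloading for you (see its offload_optimizer option).

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()

# fp32 master copies of the parameters that live in CPU RAM
cpu_params = [p.detach().float().cpu().requires_grad_(True) for p in model.parameters()]
cpu_optimizer = torch.optim.Adam(cpu_params, lr=1e-3)  # optimizer states are allocated in CPU RAM

def offloaded_step():
    """Run one optimizer update on the CPU copies (call after loss.backward())."""
    # 1) move each gradient to the CPU; doing this tensor-by-tensor keeps temporary buffers small
    for gpu_p, cpu_p in zip(model.parameters(), cpu_params):
        cpu_p.grad = gpu_p.grad.detach().float().cpu()
    # 2) perform the update entirely on the CPU, where the optimizer states live
    cpu_optimizer.step()
    cpu_optimizer.zero_grad(set_to_none=True)
    # 3) copy the updated weights back to the GPU
    with torch.no_grad():
        for gpu_p, cpu_p in zip(model.parameters(), cpu_params):
            gpu_p.copy_(cpu_p)
```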

Dataset Streaming: Usually, data is stored on disk and needs to be fully or partially loaded into CPU memory for training. Large datasets used for pre-training measure in hundreds of gigabytes or even terabytes. This poses a significant problem, as most desktops and cheap cloud instances simply do not have that much space. Furthermore, downloading the dataset over the internet would take hours before one could even begin training.

To circumvent these problems, we stream the training dataset in the same way as you stream online videos. Participants download a small random portion of the training dataset and immediately begin training on it, while additional data is loaded in the background. As such, we can train a model with virtually no memory overhead from the dataset, and switching to a new dataset is as simple as changing an argument to the dataset class.
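For instance, with the datasets library, streaming is enabled by a single extra argument; the dataset name below is just an example.

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset: nothing is downloaded up front,
# and examples are fetched over the network as the training loop consumes them
dataset = load_dataset("c4", "en", split="train", streaming=True)

# shuffle using a small in-memory buffer instead of the full dataset
dataset = dataset.shuffle(seed=42, buffer_size=10_000)

for example in dataset:
    ...  # tokenize the example and feed it to the training loop
    break
```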

Here's a tutorial for using these techniques:

In this section, we discuss common concerns related to the security of collaborative training.

Q: If I join a collaborative training, do I allow other people to execute code on my computer?

A: During the training, participants only exchange data (gradients, statistics, model weights) and never send code to each other. No other peer can execute code on your computer.

To join the training, you typically need to run the code (implementing the model, data streaming, training loop, etc.) from a repository or a Colab notebook provided by the authors of the experiment. This is no different from running any other open source project/Colab notebook.

Q: Can a malicious participant influence the training outcome?

A: It is indeed possible unless we use a defense mechanism. For instance, a malicious participant can damage the model weights by sending large numbers instead of the correct gradients. The same can happen due to broken hardware or misconfiguration.

  • One possible defense is using authentication combined with model checkpointing. In this case, participants should log in (e.g. with their Hugging Face account) to interact with the rest of the collaboration. In turn, moderators can screen potential participants and add them to an allowlist. If something goes wrong (e.g. if a participant sends invalid gradients and the model diverges), the moderators remove them from the list and revert the model to the latest checkpoint unaffected by the attack.

    Nice bonus: using this data, the moderators can acknowledge the personal contribution of each participant.

  • Another defense is replacing the naive averaging of the peers' gradients with an aggregation technique that is robust to outliers. Karimireddy et al. (2020) suggested such a technique (named CenteredClip) and proved that it does not significantly affect the model's convergence (a simplified sketch of CenteredClip is shown after this list).

    In our case, CenteredClip is useful but not sufficient to protect against malicious participants, since it assumes that the CenteredClip procedure itself is performed by a trusted server. In contrast, in our decentralized system, every participant can aggregate a part of the gradients, and we cannot assume all of them to be trusted.

    Recently, Gorbunov et al. (2021) proposed a robust aggregation protocol for decentralized systems that does not require this assumption. This protocol uses CenteredClip as a subroutine but is able to detect and ban participants who performed it incorrectly.
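To give an intuition for CenteredClip, here is a simplified single-machine sketch (not the aggregation code actually used in our system): instead of trusting the plain average, it starts from an initial estimate and repeatedly moves it towards the peers' gradients while clipping each peer's deviation to a bounded norm, so outliers have limited influence.

```python
import torch

def centered_clip(peer_gradients, tau=10.0, n_iters=5):
    """Simplified CenteredClip (Karimireddy et al., 2020) over equally shaped tensors."""
    # start from an initial estimate (zeros here; in practice, the previous aggregate)
    v = torch.zeros_like(peer_gradients[0])
    for _ in range(n_iters):
        deltas = [g - v for g in peer_gradients]
        # clip each peer's deviation to norm tau, bounding its influence on the result
        clipped = [d * min(1.0, tau / (d.norm().item() + 1e-12)) for d in deltas]
        v = v + torch.stack(clipped).mean(dim=0)
    return v

# 9 honest peers with similar gradients and 1 peer sending huge values:
honest = [torch.randn(1000) for _ in range(9)]
malicious = [torch.full((1000,), 1e6)]
print(torch.stack(honest + malicious).mean(dim=0).abs().max())  # ~1e5: the naive average is ruined
print(centered_clip(honest + malicious).abs().max())            # bounded by n_iters * tau
```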

In this section, we provide a roadmap for you to run the collaborative training yourself.

Got confused? Feel free to ask any questions at our Discord!

  1. Set up dataset streaming:
    • Upload your dataset to Hugging Face Hub in a streaming-friendly format (example).
    • Set up dataset streaming (see the "Efficient Training" section).
  2. Write the code for training peers (example):
    • Implement your model, set up dataset streaming, and write the training loop.
    • Get familiar with the hivemind library (e.g., via the quickstart).
    • In the training loop, wrap your PyTorch optimizer with hivemind.Optimizer (example); a condensed sketch of a training peer is shown after this list.
  3. (optional) Write the code for auxiliary peers (example):
    • Auxiliary peers are a special kind of peer responsible for logging the loss and other metrics (e.g., to Weights & Biases) and uploading model checkpoints (e.g., to Hugging Face Hub).
    • Such peers don't need to calculate gradients and may be run on cheap machines without GPUs.
    • They can serve as a convenient entry point to hivemind.DHT (i.e., their address can be specified as initial_peers).
    • It is useful to fix their address by providing the host_maddrs and identity_path arguments to hivemind.DHT (these are forwarded to the underlying libp2p daemon); see the second sketch after this list.
  4. (optional) Make it easier for other people to join:
    • Create notebooks for free GPU providers (Google Colab, Kaggle, AWS SageMaker, etc.). People may run them online and/or download and run them on their own hardware.
    • Create a Hugging Face organization with all resources related to the training (dataset, model, inference demo, links to a dashboard with loss and other metrics, etc.). Look at ours as an example.
    • Set up an authentication system (see the "Security" section). For example, you can ask people to join your organization with their Hugging Face accounts (Hugging Face lets you share a link for joining or manually approve new participants). This allows you to screen participants, acknowledge their contributions (e.g., make a leaderboard), and ban accounts that behave maliciously.
    • Set up an inference demo for your model (e.g., using Spaces) or a script that periodically uploads the inference results to show the training progress.
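To make step 2 more concrete, here is a condensed sketch of a training peer. The run ID, peer address, model, and the get_batch/compute_loss helpers are placeholders; see the hivemind quickstart and the examples linked above for a complete implementation.

```python
import torch
import hivemind

# Connect to the swarm. The multiaddress below is a placeholder: pass the address
# of an existing peer (e.g., an auxiliary peer from step 3) as initial_peers.
dht = hivemind.DHT(
    initial_peers=["/ip4/203.0.113.1/tcp/31337/p2p/QmExamplePeerID"],
    start=True,
)

model = torch.nn.Linear(512, 2).cuda()  # stand-in for your real model
base_optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Wrap the local optimizer: gradients are accumulated locally and averaged with the
# other peers once the collaboration reaches the target batch size.
optimizer = hivemind.Optimizer(
    dht=dht,
    run_id="my_collaborative_run",  # peers with the same run_id train together
    optimizer=base_optimizer,
    batch_size_per_step=32,         # samples this peer processes per local step
    target_batch_size=16384,        # global batch size for one collaborative update
    verbose=True,
)

while True:
    batch = get_batch()                # e.g., from the streaming dataset set up in step 1
    loss = compute_loss(model, batch)  # your model-specific loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```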
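For step 3, an auxiliary peer mainly needs a DHT instance with a stable address that training peers can pass as initial_peers. A minimal sketch, assuming you have a persistent libp2p identity file (see the hivemind documentation for how to create one):

```python
import hivemind

# Bind to a fixed port and reuse the same identity so the peer's multiaddress never changes
dht = hivemind.DHT(
    host_maddrs=["/ip4/0.0.0.0/tcp/31337"],
    identity_path="./auxiliary_peer.id",  # persistent libp2p identity key
    start=True,
)

# Print the addresses that other peers should use as initial_peers
print([str(addr) for addr in dht.get_visible_maddrs()])

# ...log metrics from the DHT and upload checkpoints here (see the example linked above)...
```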