80% 1x4 Block Sparse BERT-Large (uncased) Prune OFA

This model was created using the Prune OFA method described in Prune Once for All: Sparse Pre-Trained Language Models, presented at the ENLSP NeurIPS Workshop 2021.

For further details on the model and its results, see our paper and our implementation available here.
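
To illustrate what the 1x4 block-sparse pattern in the model name means, here is a minimal sketch (not the Prune OFA training procedure itself, which is described in the paper): weights are zeroed in contiguous groups of 4 along each row of a weight matrix, and block selection is done by magnitude. The function name and the L1 scoring rule are illustrative assumptions.

```python
import numpy as np

def prune_1x4_blocks(w, sparsity=0.8):
    """Prune a 2-D weight matrix in 1x4 blocks: weights are zeroed in
    contiguous groups of 4 along each row, keeping the blocks with the
    largest L1 norm. Illustrative sketch, not the Prune OFA algorithm."""
    rows, cols = w.shape
    assert cols % 4 == 0, "columns must be divisible by the block width"
    blocks = w.reshape(rows, cols // 4, 4)      # group columns into 1x4 blocks
    scores = np.abs(blocks).sum(axis=-1)        # L1 score per block
    k = int(scores.size * sparsity)             # number of blocks to drop
    threshold = np.sort(scores, axis=None)[k]   # k-th smallest block score
    mask = (scores >= threshold)[..., None]     # keep only high-score blocks
    return (blocks * mask).reshape(rows, cols)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))
w_sparse = prune_1x4_blocks(w, sparsity=0.8)
print(f"fraction of zero weights: {np.mean(w_sparse == 0):.2f}")
```

Because pruning happens per block rather than per weight, the achieved sparsity is the nearest block-aligned fraction to the target, and every surviving block stays fully dense, which is what makes this pattern hardware-friendly.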
