Having multiple frameworks available to use with 🤗 Transformers gives you the flexibility to play to their strengths when designing your application, but it implies that compatibility must be added on a per-model basis. The good news is that adding TensorFlow compatibility to an existing model is simpler than adding a new model from scratch! Whether you wish to have a deeper understanding of large TensorFlow models, make a major open-source contribution, or enable TensorFlow for your model of choice, this guide is for you.
This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or architectures to be used in 🤗 Transformers, with minimal supervision from the Hugging Face team. Writing a new model is no small feat, but hopefully this guide will make it less of a rollercoaster 🎢 and more of a walk in the park 🚶. Harnessing our collective experiences is absolutely critical to make this process increasingly easier, and thus we highly encourage you to suggest improvements to this guide!
Before you dive deeper, it is recommended that you check the following resources if you're new to 🤗 Transformers:

In the remainder of this guide, you will learn what's needed to add a new TensorFlow model architecture, the procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML frameworks. Let's get started!
Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture? Check the model_type field of the config.json of your model of choice (example). If the corresponding model folder in 🤗 Transformers has a file whose name starts with "modeling_tf", it means that it has a corresponding TensorFlow architecture (example).
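You can also perform this check programmatically. Here is a minimal sketch using the public AutoConfig API; the checkpoint name is just an illustrative example:

```python
from transformers import AutoConfig

# Load only the configuration of the checkpoint and inspect its model_type
config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.model_type)  # e.g. "bert"
```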
There are many ways to design a large model architecture, and multiple ways of implementing said design. However, you might recall from our general overview of 🤗 Transformers that we are an opinionated bunch - the ease of use of 🤗 Transformers relies on consistent design choices. From experience, we can tell you a few important things about adding TensorFlow models:

Here's an overview of the steps needed to add a TensorFlow model architecture:
1. Select the model you wish to convert
Let's start off with the basics: the first thing you need to know is the architecture you want to convert. If you don't have your eyes set on a specific architecture, asking the 🤗 Transformers team for suggestions is a great way to maximize your impact - we will guide you towards the most prominent architectures that are missing on the TensorFlow side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in 🤗 Transformers but is lacking weights, feel free to jump straight into the weight conversion section of this page.

For simplicity, the remainder of this guide assumes you've decided to contribute the TensorFlow version of BrandNewBert (the same example as in the guide to add a new model from scratch).
Before starting the work on a TensorFlow model architecture, double-check that there is no ongoing effort to do so. You can search for BrandNewBert on the pull request GitHub page to confirm that there is no TensorFlow-related pull request.
2. Prepare transformers dev environment
Having selected the model architecture, open a draft PR to signal your intention to work on it. Follow the instructions below to set up your environment and open a draft PR.
Fork the repository by clicking on the "Fork" button on the repository's page. This creates a copy of the code under your GitHub user account.
Clone your transformers fork to your local disk, and add the base repository as a remote:

```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

Set up a development environment, for instance by running the following commands:

```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```

Note: You don't need to have CUDA installed. Making the new model work on CPU is sufficient.

Create a branch with a descriptive name from your main branch, and make sure it is up to date with the current main:

```bash
git checkout -b add_tf_brand_new_bert
git fetch upstream
git rebase upstream/main
```
Add an empty .py file in transformers/src/models/brandnewbert/ named modeling_tf_brandnewbert.py. This will be your TensorFlow model file.
Push the changes to your account using:

```bash
git add .
git commit -m "initial commit"
git push -u origin add_tf_brand_new_bert
```
Once you are satisfied, go to the webpage of your fork on GitHub. Click on "Pull request". Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes.

Change the PR into a draft by clicking on "Convert to draft" on the right of the GitHub pull request web page.
Now you have set up a development environment to port BrandNewBert to TensorFlow in 🤗 Transformers.
3. (Optional) Understand theoretical aspects and the existing implementation
You should take some time to read BrandNewBert's paper, if such descriptive work exists. There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in 🤗 Transformers using TensorFlow. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely the existing model documentation page (e.g. model docs for BERT).

After you've grasped the basics of the models you are about to implement, it's important to understand the existing implementation. This is a great chance to confirm that a working implementation matches your expectations for the model, as well as to foresee technical challenges on the TensorFlow side.

It's perfectly natural that you feel overwhelmed with the amount of information that you've just absorbed. It is definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly encourage you to clear any pressing questions in our forum.
4. Implement the model architecture

Now it's time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of modeling_brand_new_bert.py inside src/transformers/models/brand_new_bert/ into modeling_tf_brand_new_bert.py. The goal of this section is to modify the file and update the import structure of 🤗 Transformers such that you can import TFBrandNewBert and TFBrandNewBert.from_pretrained(model_repo, from_pt=True) successfully loads a working TensorFlow BrandNewBert model.
Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of tips to make the process as smooth as possible:
- Prepend TF to the name of all classes (e.g. BrandNewBert becomes TFBrandNewBert).
- Most PyTorch operations have a direct TensorFlow replacement. For example, torch.nn.Linear corresponds to tf.keras.layers.Dense, torch.nn.Dropout corresponds to tf.keras.layers.Dropout, etc. If you're not sure about a specific operation, you can use the TensorFlow documentation or the PyTorch documentation.
- Some layers have different default values in each framework. A notable example is the batch normalization layer's epsilon (1e-5 in PyTorch and 1e-3 in TensorFlow). Double-check the documentation!
- PyTorch's nn.Parameter variables typically need to be initialized within the TF Layer's build(). See the following example: PyTorch / TensorFlow
- If the PyTorch model has a #copied from ... on top of a function, the odds are that your TensorFlow model can also borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture.
- Assigning the name attribute correctly in TensorFlow functions is critical to do the from_pt=True weight cross-loading. name is almost always the name of the corresponding variable in the PyTorch code. If name is not properly set, you will see it in the error message when loading the model weights.
- The logic of the base model class, BrandNewBertModel, will actually reside in TFBrandNewBertMainLayer, a Keras layer subclass (example). TFBrandNewBertModel will simply be a wrapper around this layer.
- Keras models need to be built in order to load pretrained weights. For that reason, TFBrandNewBertPreTrainedModel will need to hold an example of inputs to the model, the dummy_inputs (example).

Several of these tips are illustrated in the sketch right after this list.
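Here is a minimal sketch of what such a translation can look like. TFBrandNewBertSelfOutput is a hypothetical layer invented for this illustration, not code from an actual model:

```python
import tensorflow as tf

# Hypothetical PyTorch original:
#
#   class BrandNewBertSelfOutput(nn.Module):
#       def __init__(self, config):
#           super().__init__()
#           self.dense = nn.Linear(config.hidden_size, config.hidden_size)
#           self.dropout = nn.Dropout(config.hidden_dropout_prob)
#
# A possible TensorFlow translation. Note the TF prefix, the Keras
# equivalents of the PyTorch layers, and the `name` attributes that match
# the PyTorch attribute names (required for from_pt=True cross-loading).
class TFBrandNewBertSelfOutput(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = config.hidden_size
        self.dense = tf.keras.layers.Dense(config.hidden_size, name="dense")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)

    def build(self, input_shape):
        # nn.Parameter counterparts are typically created here, in build(),
        # rather than in __init__
        self.bias = self.add_weight(
            name="bias", shape=(self.hidden_size,), initializer="zeros"
        )
        super().build(input_shape)

    def call(self, hidden_states, training=False):
        hidden_states = self.dense(hidden_states) + self.bias
        # `training` is propagated from the top-level classes so that dropout
        # is only active at train time
        return self.dropout(hidden_states, training=training)
```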
In addition to the model file itself, you will also need to add the pointers to the model classes and related documentation pages. You can complete this part entirely following the patterns in other PRs (example). Here's a list of the needed manual changes:
- src/transformers/__init__.py
- src/transformers/models/auto/modeling_tf_auto.py (see the sketch after this list)
- utils/documentation_tests.txt
- src/transformers/utils/dummy_tf_objects.py
- src/transformers/models/brand_new_bert/__init__.py
- docs/source/en/model_doc/brand_new_bert.mdx
- docs/source/en/index.mdx
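As an illustration of what one of these changes can look like, here is a sketch of the kind of registration entry modeling_tf_auto.py expects. The exact mapping names vary between library versions, so treat the name below as an assumption to verify against the file itself:

```python
from collections import OrderedDict

# Illustrative sketch of the registration pattern in
# src/transformers/models/auto/modeling_tf_auto.py; check the surrounding
# file for the exact mapping your version of the library uses
TF_MODEL_MAPPING_NAMES = OrderedDict(
    [
        # ... existing entries ...
        ("brand_new_bert", "TFBrandNewBertModel"),
    ]
)
```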
When you're happy with your implementation, run the following checklist to confirm that your model architecture is ready:

1. All layers that behave differently at train time (e.g. Dropout) are called with a training argument, which is propagated all the way from the top-level classes
2. You used #copied from ... whenever possible
3. TFBrandNewBertMainLayer and all classes that use it have their call function decorated with @unpack_inputs
4. TFBrandNewBertMainLayer is decorated with @keras_serializable
5. A TensorFlow model can be loaded from PyTorch weights using TFBrandNewBert.from_pretrained(model_repo, from_pt=True)

Items 3 and 4 are illustrated in the sketch below.
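A minimal sketch of how the two decorators are applied; BrandNewBertConfig is a hypothetical stand-in for the model's real configuration class, while both decorators live in transformers.modeling_tf_utils:

```python
import tensorflow as tf
from transformers import PretrainedConfig
from transformers.modeling_tf_utils import keras_serializable, unpack_inputs


class BrandNewBertConfig(PretrainedConfig):
    # Hypothetical stand-in for the model's real configuration class
    model_type = "brand_new_bert"


@keras_serializable
class TFBrandNewBertMainLayer(tf.keras.layers.Layer):
    # @keras_serializable requires a config_class attribute and a config
    # instance as the first __init__ argument
    config_class = BrandNewBertConfig

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.config = config

    # @unpack_inputs normalizes the accepted input formats (keyword
    # arguments, dicts, etc.) before the body of call() runs
    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None, training=False):
        # ... the actual model computation goes here ...
        ...
```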
5. Add model tests

Hurray, you've implemented a TensorFlow model! Now it's time to add tests to make sure that your model behaves as expected. As in the previous section, we suggest you start by copying the test_modeling_brand_new_bert.py file in tests/models/brand_new_bert/ into test_modeling_tf_brand_new_bert.py, and continue by making the necessary TensorFlow replacements. For now, in all .from_pretrained() calls, you should use the from_pt=True flag to load the existing PyTorch weights.
After you're done, it's time for the moment of truth: run the tests! 😬

```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```
The most likely outcome is that you'll see a bunch of errors. Don't worry, this is expected! Debugging ML models is notoriously hard, and the key ingredient to success is patience (and breakpoint()). In our experience, the hardest problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide. In other cases, a general test might not be directly applicable to your model, in which case we suggest an override at the model test class level (see the sketch below). Regardless of the issue, don't hesitate to ask for help in your draft pull request if you're stuck.
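For instance, a general test that assumes an input format your model does not support can be skipped at the class level. A minimal sketch, with a made-up test name and reason:

```python
import unittest

from transformers.testing_utils import require_tf


@require_tf
class TFBrandNewBertModelTest(unittest.TestCase):
    # Hypothetical override: the inherited general test is assumed not to
    # apply to this architecture, so we skip it with an explanatory reason
    @unittest.skip(reason="BrandNewBert does not support this input format")
    def test_inputs_embeds(self):
        pass
```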
When all tests pass, congratulations, your model is nearly ready to be added to the 🤗 Transformers library! 🎉
6. Submit the pull request
Once you're done with the implementation and the tests, it's time to submit a pull request. Before pushing your code, run our code formatting utility, make fixup 🪄. This will automatically fix any formatting issues that would otherwise cause our automatic checks to fail.
It's now time to convert your draft pull request into a real pull request. To do so, click on the "Ready for review" button and add Joao (@gante) and Matt (@Rocketknight1) as reviewers. A model pull request will need at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model.
After all reviewers are happy with the state of your PR, the final action point is to remove the from_pt=True flag in .from_pretrained() calls. Since there are no TensorFlow weights, you will have to add them! Check the section below for instructions on how to do it.

Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are green, double-check the tests locally one last time

```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```

and we will merge your PR! Congratulations on the milestone 🎉
7. (Optional) Build demos and share with the world
One of the hardest parts about open source is discovery. How can other users learn about the existence of your fabulous TensorFlow contribution? With proper communication, of course! 📣

There are two main ways to share your model with the community:
- Build demos. These include Gradio demos, notebooks, and other fun ways to show off your model. We highly encourage you to add a notebook to our community-driven demos.
- Share stories on social media like Twitter and LinkedIn. You should be proud of your work and share your achievement with the community - your model can now be used by thousands of engineers and researchers around the world 🌍! We will be happy to help you share your work with the community.
Adding TensorFlow weights to 🤗 Hub

Assuming that the TensorFlow model architecture is available in 🤗 Transformers, converting PyTorch weights into TensorFlow weights is a breeze! Here's how to do it:
1. Make sure you are logged into your Hugging Face account in your terminal. You can log in using the command huggingface-cli login (you can find your access tokens here)
2. Run transformers-cli pt-to-tf --model-name foo/bar, where foo/bar is the name of the model repository containing the PyTorch weights you want to convert
3. Tag @joaogante and @Rocketknight1 in the 🤗 Hub PR the command above has just created

That's it! 🎉
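If you'd rather see what the conversion boils down to, its core can be sketched in a few lines of Python. TFBrandNewBertModel and the paths below are this guide's illustrative placeholders:

```python
from transformers import TFBrandNewBertModel  # illustrative class name

# Build the TensorFlow model directly from the repository's PyTorch weights...
tf_model = TFBrandNewBertModel.from_pretrained("foo/bar", from_pt=True)

# ...and save it, which writes the TensorFlow weight file alongside the
# configuration into the chosen local directory
tf_model.save_pretrained("path/to/local_dir")
```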
Debugging mismatches across ML frameworks

At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the model architecture code for the two frameworks, and find that they look identical. What's going on? 🤔
First of all, let's talk about why understanding these mismatches matters. Many community members will use 🤗 Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch between the two frameworks, it implies that the model is not following the reference implementation for at least one of the frameworks. This might lead to silent failures, in which the model runs but has poor performance. This is arguably worse than a model that fails to run at all! To that end, we aim at having a framework mismatch smaller than 1e-5 at all stages of the model, which you can measure as sketched below.
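A minimal sketch of such a comparison, reusing this guide's placeholder names (foo/bar, BrandNewBertModel, TFBrandNewBertModel) for the repository and model classes:

```python
import numpy as np
import torch
from transformers import AutoTokenizer, BrandNewBertModel, TFBrandNewBertModel

tokenizer = AutoTokenizer.from_pretrained("foo/bar")

# Load the same checkpoint in both frameworks
pt_model = BrandNewBertModel.from_pretrained("foo/bar")
tf_model = TFBrandNewBertModel.from_pretrained("foo/bar", from_pt=True)

# Run the same input through both models
pt_inputs = tokenizer("Hello world!", return_tensors="pt")
tf_inputs = tokenizer("Hello world!", return_tensors="tf")

with torch.no_grad():
    pt_outputs = pt_model(**pt_inputs).last_hidden_state.numpy()
tf_outputs = tf_model(**tf_inputs).last_hidden_state.numpy()

# The maximum absolute difference is the mismatch we want below 1e-5
print("Max difference:", np.max(np.abs(pt_outputs - tf_outputs)))
```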
As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret ingredient here is patience. Here is our suggested workflow for when you come across this type of issue: place breakpoint() statements in the two frameworks' architectures, and compare the values of the numerical variables in a top-down fashion until you find the source of the problems.

In some cases, in discussion with the 🤗 Transformers team, we might find that fixing the mismatch is infeasible.
When the mismatch is very small in the output layers of the model (but potentially large in the hidden states), we might decide to ignore it in favor of distributing the model. The pt-to-tf CLI mentioned above has a --max-error flag to override the error message at weight conversion time.