Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: model.compile(optimizer=optimizer) # No loss argument! The last two things to set up before you start training are to compute the accuracy from the predictions, and to provide a way to push your model to the Hub. Both are done by using Keras callbacks. Pass your compute_metrics function to [~transformers.KerasMetricCallback]:
from transformers.keras_callbacks import KerasMetricCallback metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]: from transformers.keras_callbacks import PushToHubCallback push_to_hub_callback = PushToHubCallback( output_dir="my_awesome_model", tokenizer=tokenizer, ) Then bundle your callbacks together: callbacks = [metric_callback, push_to_hub_callback]
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding PyTorch notebook or TensorFlow notebook.
Inference Great, now that you've finetuned a model, you can use it for inference! Come up with some text and two candidate answers: prompt = "France has a bread law, Le DΓ©cret Pain, with strict rules on what is allowed in a traditional baguette." candidate1 = "The law does not apply to croissants and brioche." candidate2 = "The law applies to baguettes."
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some labels: import torch from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model") inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True) labels = torch.tensor(0).unsqueeze(0)
Pass your inputs and labels to the model and return the logits: from transformers import AutoModelForMultipleChoice model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model") outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels) logits = outputs.logits
Get the class with the highest probability: predicted_class = logits.argmax().item() predicted_class '0' Tokenize each prompt and candidate answer pair and return TensorFlow tensors: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model") inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
Pass your inputs to the model and return the logits: import tensorflow as tf from transformers import TFAutoModelForMultipleChoice model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model") inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()} outputs = model(inputs) logits = outputs.logits Get the class with the highest probability: predicted_class = int(tf.math.argmax(logits, axis=-1)[0]) predicted_class '0'
Before you begin, make sure you have all the necessary libraries installed: pip install transformers datasets evaluate We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: from huggingface_hub import notebook_login notebook_login()
Load SQuAD dataset Start by loading a smaller subset of the SQuAD dataset from the πŸ€— Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. from datasets import load_dataset squad = load_dataset("squad", split="train[:5000]") Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
squad = squad.train_test_split(test_size=0.2) Then take a look at an example:
squad["train"][0] {'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame' }
There are several important fields here: answers: the starting location of the answer token and the answer text. context: background information from which the model needs to extract the answer. question: the question a model should answer. Preprocess The next step is to load a DistilBERT tokenizer to process the question and context fields: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
There are a few preprocessing steps particular to question answering tasks you should be aware of:
Some examples in a dataset may have a very long context that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the context by setting truncation="only_second". Next, map the start and end positions of the answer to the original context by setting return_offsets_mapping=True. With the mapping in hand, you can find the start and end tokens of the answer. Use the [~tokenizers.Encoding.sequence_ids] method to find which part of the offset corresponds to the question and which corresponds to the context.
Here is how you can create a function to truncate and map the start and end tokens of the answer to the context: def preprocess_function(examples): questions = [q.strip() for q in examples["question"]] inputs = tokenizer( questions, examples["context"], max_length=384, truncation="only_second", return_offsets_mapping=True, padding="max_length", )
offset_mapping = inputs.pop("offset_mapping") answers = examples["answers"] start_positions = [] end_positions = [] for i, offset in enumerate(offset_mapping): answer = answers[i] start_char = answer["answer_start"][0] end_char = answer["answer_start"][0] + len(answer["text"][0]) sequence_ids = inputs.sequence_ids(i) # Find the start and end of the context idx = 0 while sequence_ids[idx] != 1: idx += 1 context_start = idx while sequence_ids[idx] == 1: idx += 1 context_end = idx - 1 # If the answer is not fully inside the context, label it (0, 0) if offset[context_start][0] > end_char or offset[context_end][1] < start_char: start_positions.append(0) end_positions.append(0) else: # Otherwise it's the start and end token positions idx = context_start while idx <= context_end and offset[idx][0] <= start_char: idx += 1 start_positions.append(idx - 1) idx = context_end while idx >= context_start and offset[idx][1] >= end_char: idx -= 1 end_positions.append(idx + 1) inputs["start_positions"] = start_positions inputs["end_positions"] = end_positions return inputs
To apply the preprocessing function over the entire dataset, use the πŸ€— Datasets [~datasets.Dataset.map] function. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once. Remove any columns you don't need: tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
Now create a batch of examples using [DefaultDataCollator]. Unlike other data collators in πŸ€— Transformers, the [DefaultDataCollator] does not apply any additional preprocessing such as padding. For PyTorch: from transformers import DefaultDataCollator data_collator = DefaultDataCollator() For TensorFlow: from transformers import DefaultDataCollator data_collator = DefaultDataCollator(return_tensors="tf")
Train If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here! You're ready to start training your model now! Load DistilBERT with [AutoModelForQuestionAnswering]: from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer model = AutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, and data collator. Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments( output_dir="my_awesome_qa_model", evaluation_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=3, weight_decay=0.01, push_to_hub=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_squad["train"], eval_dataset=tokenized_squad["test"], tokenizer=tokenizer, data_collator=data_collator, ) trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model: trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here! To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer batch_size = 16 num_epochs = 2 total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs optimizer, schedule = create_optimizer( init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps, ) Then you can load DistilBERT with [TFAutoModelForQuestionAnswering]:
from transformers import TFAutoModelForQuestionAnswering model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased") Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset( tokenized_squad["train"], shuffle=True, batch_size=16, collate_fn=data_collator, ) tf_validation_set = model.prepare_tf_dataset( tokenized_squad["test"], shuffle=False, batch_size=16, collate_fn=data_collator, ) Configure the model for training with compile:
import tensorflow as tf model.compile(optimizer=optimizer) The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [~transformers.PushToHubCallback]: from transformers.keras_callbacks import PushToHubCallback callback = PushToHubCallback( output_dir="my_awesome_qa_model", tokenizer=tokenizer, )
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callback to finetune the model: model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding PyTorch notebook or TensorFlow notebook.
Evaluate Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [Trainer] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance. If you have more time and you're interested in how to evaluate your model for question answering, take a look at the Question answering chapter from the πŸ€— Hugging Face Course!
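If you just want a quick sanity check, the πŸ€— Evaluate library ships a squad metric that computes exact match and F1 once you have mapped the model's start and end logits back to answer strings (that mapping is exactly the postprocessing step this guide skips). A minimal sketch with a hypothetical placeholder prediction, reusing the gold answer from the example shown earlier:

```python
import evaluate

squad_metric = evaluate.load("squad")

# Hypothetical placeholder: in practice, prediction_text comes from decoding the
# span selected by your model's start/end logits for each example id.
predictions = [{"id": "example-0", "prediction_text": "Saint Bernadette Soubirous"}]
references = [
    {
        "id": "example-0",
        "answers": {"text": ["Saint Bernadette Soubirous"], "answer_start": [515]},
    }
]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```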
question = "How many programming languages does BLOOM support?" context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages." The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for question answering with your model, and pass your text to it:
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for question answering with your model, and pass your text to it: from transformers import pipeline question_answerer = pipeline("question-answering", model="my_awesome_qa_model") question_answerer(question=question, context=context) {'score': 0.2058267742395401, 'start': 10, 'end': 95, 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
You can also manually replicate the results of the pipeline if you'd like: Tokenize the text and return PyTorch tensors: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model") inputs = tokenizer(question, context, return_tensors="pt") Pass your inputs to the model and return the logits:
import torch from transformers import AutoModelForQuestionAnswering model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model") with torch.no_grad(): outputs = model(**inputs) Get the highest probability from the model output for the start and end positions: answer_start_index = outputs.start_logits.argmax() answer_end_index = outputs.end_logits.argmax() Decode the predicted tokens to get the answer:
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13' Tokenize the text and return TensorFlow tensors:
from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model") inputs = tokenizer(question, context, return_tensors="tf") Pass your inputs to the model and return the logits: import tensorflow as tf from transformers import TFAutoModelForQuestionAnswering model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model") outputs = model(**inputs) Get the highest probability from the model output for the start and end positions:
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) Decode the predicted tokens to get the answer: predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] tokenizer.decode(predict_answer_tokens) '176 billion parameters and can generate text in 46 languages natural languages and 13'
Before you begin, make sure you have all the necessary libraries installed: pip install transformers datasets evaluate We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: from huggingface_hub import notebook_login notebook_login()
Load ELI5 dataset Start by loading the first 5000 examples from the ELI5-Category dataset with the πŸ€— Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. from datasets import load_dataset eli5 = load_dataset("eli5_category", split="train[:5000]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method: eli5 = eli5.train_test_split(test_size=0.2) Then take a look at an example:
eli5["train"][0] {'q_id': '7h191n', 'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?', 'selftext': '', 'category': 'Economics', 'subreddit': 'explainlikeimfive', 'answers': {'a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'], 'text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change will make certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.", 'None yet. It has to be reconciled with a vastly different house bill and then passed again.', 'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?', 'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'], 'score': [21, 19, 5, 3], 'text_urls': [[], [], [], ['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']]}, 'title_urls': ['url'], 'selftext_urls': ['url']}
While this may look like a lot, you're only really interested in the text field. What's cool about language modeling tasks is you don't need labels (also known as an unsupervised task) because the next word is the label. Preprocess For masked language modeling, the next step is to load a DistilRoBERTa tokenizer to process the text subfield: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilbert/distilroberta-base")
You'll notice from the example above that the text field is actually nested inside answers. This means you'll need to extract the text subfield from its nested structure with the flatten method:
eli5 = eli5.flatten() eli5["train"][0] {'q_id': '7h191n', 'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?', 'selftext': '', 'category': 'Economics', 'subreddit': 'explainlikeimfive', 'answers.a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'], 'answers.text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change will make certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.", 'None yet. It has to be reconciled with a vastly different house bill and then passed again.', 'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?', 'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'], 'answers.score': [21, 19, 5, 3], 'answers.text_urls': [[], [], [], ['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']], 'title_urls': ['url'], 'selftext_urls': ['url']}
Each subfield is now a separate column as indicated by the answers prefix, and the text field is a list now. Instead of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them. Here is a first preprocessing function to join the list of strings for each example and tokenize the result: def preprocess_function(examples): return tokenizer([" ".join(x) for x in examples["answers.text"]])
To apply this preprocessing function over the entire dataset, use the πŸ€— Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once, and increasing the number of processes with num_proc. Remove any columns you don't need:
tokenized_eli5 = eli5.map( preprocess_function, batched=True, num_proc=4, remove_columns=eli5["train"].column_names, )
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model. You can now use a second preprocessing function to - concatenate all the sequences - split the concatenated sequences into shorter chunks defined by block_size, which should be both shorter than the maximum input length and short enough for your GPU RAM.
block_size = 128 def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. if total_length >= block_size: total_length = (total_length // block_size) * block_size # Split by chunks of block_size. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } return result
Apply the group_texts function over the entire dataset: lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) Now create a batch of examples using [DataCollatorForLanguageModeling]. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. Use the end-of-sequence token as the padding token and specify mlm_probability to randomly mask tokens each time you iterate over the data:
For PyTorch: from transformers import DataCollatorForLanguageModeling tokenizer.pad_token = tokenizer.eos_token data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
For TensorFlow: from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
Train If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here! You're ready to start training your model now! Load DistilRoBERTa with [AutoModelForMaskedLM]: from transformers import AutoModelForMaskedLM model = AutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base") At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). Pass the training arguments to [Trainer] along with the model, datasets, and data collator. Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments( output_dir="my_awesome_eli5_mlm_model", evaluation_strategy="epoch", learning_rate=2e-5, num_train_epochs=3, weight_decay=0.01, push_to_hub=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=lm_dataset["train"], eval_dataset=lm_dataset["test"], data_collator=data_collator, ) trainer.train()
Once training is completed, use the [~transformers.Trainer.evaluate] method to evaluate your model and get its perplexity: import math eval_results = trainer.evaluate() print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") Perplexity: 8.76 Then share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model: trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: from transformers import create_optimizer, AdamWeightDecay optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) Then you can load DistilRoBERTa with [TFAutoModelForMaskedLM]:
from transformers import TFAutoModelForMaskedLM model = TFAutoModelForMaskedLM.from_pretrained("distilbert/distilroberta-base") Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset( lm_dataset["train"], shuffle=True, batch_size=16, collate_fn=data_collator, ) tf_test_set = model.prepare_tf_dataset( lm_dataset["test"], shuffle=False, batch_size=16, collate_fn=data_collator, )
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: import tensorflow as tf model.compile(optimizer=optimizer) # No loss argument! The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback callback = PushToHubCallback( output_dir="my_awesome_eli5_mlm_model", tokenizer=tokenizer, )
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callback to finetune the model: model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it! For a more in-depth example of how to finetune a model for masked language modeling, take a look at the corresponding PyTorch notebook or TensorFlow notebook.
Inference Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like the model to fill in the blank with, and use the special <mask> token to indicate the blank: text = "The Milky Way is a <mask> galaxy."
text = "The Milky Way is a galaxy." The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for fill-mask with your model, and pass your text to it. If you like, you can use the top_k parameter to specify how many predictions to return:
from transformers import pipeline mask_filler = pipeline("fill-mask", "username/my_awesome_eli5_mlm_model") mask_filler(text, top_k=3) [{'score': 0.5150994658470154, 'token': 21300, 'token_str': ' spiral', 'sequence': 'The Milky Way is a spiral galaxy.'}, {'score': 0.07087188959121704, 'token': 2232, 'token_str': ' massive', 'sequence': 'The Milky Way is a massive galaxy.'}, {'score': 0.06434620916843414, 'token': 650, 'token_str': ' small', 'sequence': 'The Milky Way is a small galaxy.'}]
Tokenize the text and return the input_ids as PyTorch tensors. You'll also need to specify the position of the <mask> token: import torch from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_mlm_model") inputs = tokenizer(text, return_tensors="pt") mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1] Pass your inputs to the model and return the logits of the masked token:
from transformers import AutoModelForMaskedLM model = AutoModelForMaskedLM.from_pretrained("username/my_awesome_eli5_mlm_model") logits = model(**inputs).logits mask_token_logits = logits[0, mask_token_index, :] Then return the three masked tokens with the highest probability and print them out:
top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist() for token in top_3_tokens: print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy. Tokenize the text and return the input_ids as TensorFlow tensors. You'll also need to specify the position of the <mask> token:
import tensorflow as tf from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_mlm_model") inputs = tokenizer(text, return_tensors="tf") mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1] Pass your inputs to the model and return the logits of the masked token:
from transformers import TFAutoModelForMaskedLM model = TFAutoModelForMaskedLM.from_pretrained("username/my_awesome_eli5_mlm_model") logits = model(**inputs).logits mask_token_logits = logits[0, mask_token_index, :] Then return the three masked tokens with the highest probability and print them out:
top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy() for token in top_3_tokens: print(text.replace(tokenizer.mask_token, tokenizer.decode([token]))) The Milky Way is a spiral galaxy. The Milky Way is a massive galaxy. The Milky Way is a small galaxy.
Image-to-Image Task Guide [[open-in-colab]] Image-to-image is the task where an application receives an image and outputs another image. It has various subtasks, including image enhancement (super resolution, low light enhancement, deraining and so on), image inpainting, and more. This guide will show you how to: - Use an image-to-image pipeline for a super-resolution task, - Run image-to-image models for the same task without a pipeline. Note that, as of the time this guide was released, the image-to-image pipeline only supports the super-resolution task. Let's begin by installing the necessary libraries.
pip install transformers We can now initialize the pipeline with a Swin2SR model. We can then infer with the pipeline by calling it with an image. As of now, only Swin2SR models are supported in this pipeline. import torch from transformers import pipeline device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') pipe = pipeline(task="image-to-image", model="caidas/swin2SR-lightweight-x2-64", device=device)
Now, let's load an image. from PIL import Image import requests url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/cat.jpg" image = Image.open(requests.get(url, stream=True).raw) print(image.size) (532, 432) We can now do inference with the pipeline. We will get an upscaled version of the cat image. upscaled = pipe(image) print(upscaled.size) (1072, 880)
If you wish to do inference yourself with no pipeline, you can use the Swin2SRForImageSuperResolution and Swin2SRImageProcessor classes of transformers. We will use the same model checkpoint for this. Let's initialize the model and the processor. from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-lightweight-x2-64").to(device) processor = Swin2SRImageProcessor.from_pretrained("caidas/swin2SR-lightweight-x2-64")
The pipeline abstracts away the preprocessing and postprocessing steps that we have to do ourselves, so let's preprocess the image. We will pass the image to the processor and then move the pixel values to GPU. pixel_values = processor(image, return_tensors="pt").pixel_values print(pixel_values.shape) pixel_values = pixel_values.to(device)
We can now infer the image by passing the pixel values to the model. import torch with torch.no_grad(): outputs = model(pixel_values) The output is an object of type ImageSuperResolutionOutput that looks like below πŸ‘‡ (loss=None, reconstruction=tensor([[[[0.8270, 0.8269, 0.8275, ..., 0.7463, 0.7446, 0.7453], [0.8287, 0.8278, 0.8283, ..., 0.7451, 0.7448, 0.7457], [0.8280, 0.8273, 0.8269, ..., 0.7447, 0.7446, 0.7452], ..., [0.5923, 0.5933, 0.5924, ..., 0.0697, 0.0695, 0.0706], [0.5926, 0.5932, 0.5926, ..., 0.0673, 0.0687, 0.0705], [0.5927, 0.5914, 0.5922, ..., 0.0664, 0.0694, 0.0718]]]], device='cuda:0'), hidden_states=None, attentions=None) We need to get the reconstruction and post-process it for visualization. Let's see how it looks. outputs.reconstruction.data.shape torch.Size([1, 3, 880, 1072])
We need to squeeze the output and get rid of axis 0, clip the values, then convert it to a NumPy float array. Then we will arrange the axes to have the shape [1072, 880], and finally, bring the output back to range [0, 255].
import numpy as np
# squeeze, take to CPU and clip the values
output = outputs.reconstruction.data.squeeze().cpu().clamp_(0, 1).numpy()
# rearrange the axes
output = np.moveaxis(output, source=0, destination=-1)
# bring values back to pixel values range
output = (output * 255.0).round().astype(np.uint8)
Image.fromarray(output)
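If you'd like to keep the result for a side-by-side comparison with the original image, one option is to save both to disk. This is a small optional sketch on top of the guide; the file names here are arbitrary:

```python
# Save the original and the upscaled image so they can be compared side by side
image.save("original_cat.png")
Image.fromarray(output).save("upscaled_cat.png")
```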
Before you begin, make sure you have all the necessary libraries installed: pip install -q pytorchvideo transformers evaluate You will use PyTorchVideo (dubbed pytorchvideo) to process and prepare the videos. We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in: from huggingface_hub import notebook_login notebook_login()
Load UCF101 dataset Start by loading a subset of the UCF-101 dataset. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset. from huggingface_hub import hf_hub_download hf_dataset_identifier = "sayakpaul/ucf101-subset" filename = "UCF101_subset.tar.gz" file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset")
After the subset has been downloaded, you need to extract the compressed archive: import tarfile with tarfile.open(file_path) as t: t.extractall(".") At a high level, the dataset is organized like so:
UCF101_subset/
    train/
        BandMarching/
            video_1.mp4
            video_2.mp4
        Archery/
            video_1.mp4
            video_2.mp4
    val/
        BandMarching/
            video_1.mp4
            video_2.mp4
        Archery/
            video_1.mp4
            video_2.mp4
    test/
        BandMarching/
            video_1.mp4
            video_2.mp4
        Archery/
            video_1.mp4
            video_2.mp4

The (sorted) video paths appear like so:
'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi'
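The rest of this guide refers to two helper variables: dataset_root_path, which points at the extracted UCF101_subset/ folder, and all_video_file_paths, the list of video files inside it. Here is a minimal sketch of how you might build them with pathlib, assuming the archive was extracted into the current directory; the glob patterns mirror the .avi paths shown above, so adjust them if your copy of the subset is laid out differently:

```python
import pathlib

# Root of the extracted archive (assumes it was extracted into the current directory)
dataset_root_path = pathlib.Path("UCF101_subset")

# Collect every video file across the train/val/test splits
all_video_file_paths = sorted(
    list(dataset_root_path.glob("train/*/*.avi"))
    + list(dataset_root_path.glob("val/*/*.avi"))
    + list(dataset_root_path.glob("test/*/*.avi"))
)
print(len(all_video_file_paths))
```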
You will notice that there are video clips belonging to the same group / scene, where the group is denoted by g in the video file paths (v_ApplyEyeMakeup_g07_c04.avi and v_ApplyEyeMakeup_g07_c06.avi, for example). For the validation and evaluation splits, you wouldn't want to have video clips from the same group / scene, to prevent data leakage. The subset that you are using in this tutorial takes this information into account. Next up, you will derive the set of labels present in the dataset. Also, create two dictionaries that'll be helpful when initializing the model:
label2id: maps the class names to integers. id2label: maps the integers to class names. class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) label2id = {label: i for i, label in enumerate(class_labels)} id2label = {i: label for label, i in label2id.items()} print(f"Unique classes: {list(label2id.keys())}.")
Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress'].
There are 10 unique classes. For each class, there are 30 videos in the training set. Load a model to fine-tune Instantiate a video classification model from a pretrained checkpoint and its associated image processor. The model's encoder comes with pre-trained parameters, and the classification head is randomly initialized. The image processor will come in handy when writing the preprocessing pipeline for our dataset.
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification model_ckpt = "MCG-NJU/videomae-base" image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) model = VideoMAEForVideoClassification.from_pretrained( model_ckpt, label2id=label2id, id2label=id2label, ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ) While the model is loading, you might notice the following warning:
Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

The warning is telling us we are throwing away some weights (e.g. the weights and bias of the classifier layer) and randomly initializing some others (the weights and bias of a new classifier layer). This is expected in this case, because we are adding a new head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do. Note that this checkpoint leads to better performance on this task as the checkpoint was obtained fine-tuning on a similar downstream task having considerable domain overlap. You can check out this checkpoint which was obtained by fine-tuning MCG-NJU/videomae-base-finetuned-kinetics.

Prepare the datasets for training For preprocessing the videos, you will leverage the PyTorchVideo library. Start by importing the dependencies we need.
import os import pytorchvideo.data from pytorchvideo.transforms import ( ApplyTransformToKey, Normalize, RandomShortSideScale, RemoveKey, ShortSideScale, UniformTemporalSubsample, ) from torchvision.transforms import ( Compose, Lambda, RandomCrop, RandomHorizontalFlip, Resize, )
For the training dataset transformations, use a combination of uniform temporal subsampling, pixel normalization, random cropping, and random horizontal flipping. For the validation and evaluation dataset transformations, keep the same transformation chain except for random cropping and horizontal flipping. To learn more about the details of these transformations check out the official documentation of PyTorchVideo. Use the image_processor associated with the pre-trained model to obtain the following information:
Image mean and standard deviation with which the video frame pixels will be normalized. Spatial resolution to which the video frames will be resized. Start by defining some constants.
mean = image_processor.image_mean std = image_processor.image_std if "shortest_edge" in image_processor.size: height = width = image_processor.size["shortest_edge"] else: height = image_processor.size["height"] width = image_processor.size["width"] resize_to = (height, width) num_frames_to_sample = model.config.num_frames sample_rate = 4 fps = 30 clip_duration = num_frames_to_sample * sample_rate / fps
Now, define the dataset-specific transformations and the datasets respectively. Starting with the training set:
train_transform = Compose( [ ApplyTransformToKey( key="video", transform=Compose( [ UniformTemporalSubsample(num_frames_to_sample), Lambda(lambda x: x / 255.0), Normalize(mean, std), RandomShortSideScale(min_size=256, max_size=320), RandomCrop(resize_to), RandomHorizontalFlip(p=0.5), ] ), ), ] ) train_dataset = pytorchvideo.data.Ucf101( data_path=os.path.join(dataset_root_path, "train"), clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), decode_audio=False, transform=train_transform, )
The same sequence of workflow can be applied to the validation and evaluation sets:
val_transform = Compose( [ ApplyTransformToKey( key="video", transform=Compose( [ UniformTemporalSubsample(num_frames_to_sample), Lambda(lambda x: x / 255.0), Normalize(mean, std), Resize(resize_to), ] ), ), ] ) val_dataset = pytorchvideo.data.Ucf101( data_path=os.path.join(dataset_root_path, "val"), clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), decode_audio=False, transform=val_transform, ) test_dataset = pytorchvideo.data.Ucf101( data_path=os.path.join(dataset_root_path, "test"), clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), decode_audio=False, transform=val_transform, )
Note: The above dataset pipelines are taken from the official PyTorchVideo example. We're using the pytorchvideo.data.Ucf101() function because it's tailored for the UCF-101 dataset. Under the hood, it returns a pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset object. The LabeledVideoDataset class is the base class for all things video in PyTorchVideo. So, if you want to use a custom dataset not supported off-the-shelf by PyTorchVideo, you can extend the LabeledVideoDataset class accordingly. Refer to the data API documentation to learn more. Also, if your dataset follows a similar structure (as shown above), then using pytorchvideo.data.Ucf101() should work just fine. You can access the num_videos attribute to know the number of videos in the dataset, as shown below.
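For example, a quick way to sanity-check the split sizes is to read num_videos off the LabeledVideoDataset objects created above (a minimal sketch):

```python
# Number of videos that ended up in each split
print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos)
```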