Now instantiate your DataCollatorCTCWithPadding:
data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest")
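If you're dropping into the guide at this step, note that the collator class itself isn't shown above. For reference, here is a minimal sketch of what such a CTC collator typically looks like; the field names follow the common Wav2Vec2 preprocessing pattern and are an assumption here rather than code taken from this page:
import torch
from dataclasses import dataclass
from typing import Dict, List, Union
from transformers import AutoProcessor

@dataclass
class DataCollatorCTCWithPadding:
    processor: AutoProcessor
    padding: Union[bool, str] = "longest"

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # audio inputs and text labels have different lengths, so pad them separately
        input_features = [{"input_values": feature["input_values"]} for feature in features]
        label_features = [{"input_ids": feature["labels"]} for feature in features]

        batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
        labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt")

        # set padded label positions to -100 so they are ignored by the CTC loss
        batch["labels"] = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        return batch
The key detail is that the audio arrays and the label token ids are padded separately, and padded label positions are masked to -100 so the loss ignores them.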
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the word error rate (WER) metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate

wer = evaluate.load("wer")
Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the WER:
import numpy as np

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)

    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids)
    # we do not want to group tokens when computing the metric on the labels
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

    wer_score = wer.compute(predictions=pred_str, references=label_str)

    return {"wer": wer_score}
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load Wav2Vec2 with [AutoModelForCTC]. Specify the reduction to apply with the ctc_loss_reduction parameter. It is often better to use the average instead of the default summation:
from transformers import AutoModelForCTC, TrainingArguments, Trainer
model = AutoModelForCTC.from_pretrained(
"facebook/wav2vec2-base",
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
)
At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the WER and save the training checkpoint.
Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="my_awesome_asr_mind_model",
per_device_train_batch_size=8,
gradient_accumulation_steps=2,
learning_rate=1e-5,
warmup_steps=500,
max_steps=2000,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=encoded_minds["train"],
eval_dataset=encoded_minds["test"],
tokenizer=processor,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
For a more in-depth example of how to finetune a model for automatic speech recognition, take a look at this blog post for English ASR and this post for multilingual ASR.
Inference
Great, now that you've finetuned a model, you can use it for inference!
Load an audio file you'd like to run inference on. Remember to resample the sampling rate of the audio file to match the sampling rate of the model if you need to!
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", "en-US", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
sampling_rate = dataset.features["audio"].sampling_rate
audio_file = dataset[0]["audio"]["path"]
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for automatic speech recognition with your model, and pass your audio file to it:
from transformers import pipeline
transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model")
transcriber(audio_file)
{'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'}
The transcription is decent, but it could be better! Try finetuning your model on more examples to get even better results!
You can also manually replicate the results of the pipeline if you'd like:
Load a processor to preprocess the audio file and transcription and return the input as PyTorch tensors:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model")
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
Pass your inputs to the model and return the logits:
import torch
from transformers import AutoModelForCTC

model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model")
with torch.no_grad():
    logits = model(**inputs).logits
Get the predicted input_ids with the highest probability, and use the processor to decode the predicted input_ids back into text:
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
transcription
['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER']
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login

notebook_login()
Load ELI5 dataset
Start by loading the first 5000 examples from the ELI5-Category dataset with the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
from datasets import load_dataset

eli5 = load_dataset("eli5_category", split="train[:5000]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
eli5 = eli5.train_test_split(test_size=0.2)
Then take a look at an example:
eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change will make certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.",
'None yet. It has to be reconciled with a vastly different house bill and then passed again.',
'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?',
'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'],
'score': [21, 19, 5, 3],
'text_urls': [[],
[],
[],
['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']]},
'title_urls': ['url'],
'selftext_urls': ['url']}
While this may look like a lot, you're only really interested in the text field. What's cool about language modeling
tasks is you don't need labels (also known as an unsupervised task) because the next word is the label.
Preprocess
The next step is to load a DistilGPT2 tokenizer to process the text subfield:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
You'll notice from the example above that the text field is actually nested inside answers. This means you'll need to
extract the text subfield from its nested structure with the flatten method:
eli5 = eli5.flatten()
eli5["train"][0]
{'q_id': '7h191n',
'title': 'What does the tax bill that was passed today mean? How will it affect Americans in each tax bracket?',
'selftext': '',
'category': 'Economics',
'subreddit': 'explainlikeimfive',
'answers.a_id': ['dqnds8l', 'dqnd1jl', 'dqng3i1', 'dqnku5x'],
'answers.text': ["The tax bill is 500 pages long and there were a lot of changes still going on right to the end. It's not just an adjustment to the income tax brackets, it's a whole bunch of changes. As such there is no good answer to your question. The big take aways are: - Big reduction in corporate income tax rate will make large companies very happy. - Pass through rate change will make certain styles of business (law firms, hedge funds) extremely happy - Income tax changes are moderate, and are set to expire (though it's the kind of thing that might just always get re-applied without being made permanent) - People in high tax states (California, New York) lose out, and many of them will end up with their taxes raised.",
'None yet. It has to be reconciled with a vastly different house bill and then passed again.',
'Also: does this apply to 2017 taxes? Or does it start with 2018 taxes?',
'This article explains both the House and senate bills, including the proposed changes to your income taxes based on your income level. URL_0'],
'answers.score': [21, 19, 5, 3],
'answers.text_urls': [[],
[],
[],
['https://www.investopedia.com/news/trumps-tax-reform-what-can-be-done/']],
'title_urls': ['url'],
'selftext_urls': ['url']}
Each subfield is now a separate column as indicated by the answers prefix, and the text field is a list now. Instead
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.
Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
def preprocess_function(examples):
    return tokenizer([" ".join(x) for x in examples["answers.text"]])
To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once, and increasing the number of processes with num_proc. Remove any columns you don't need:
tokenized_eli5 = eli5.map(
preprocess_function,
batched=True,
num_proc=4,
remove_columns=eli5["train"].column_names,
)
This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.
You can now use a second preprocessing function to
concatenate all the sequences
split the concatenated sequences into shorter chunks defined by block_size, which should be both shorter than the maximum input length and short enough for your GPU RAM.
block_size = 128

def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder. We could add padding instead if the model supported it;
    # you can customize this part to your needs.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split by chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
Apply the group_texts function over the entire dataset:
lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
Now create a batch of examples using [DataCollatorForLanguageModeling]. It's more efficient to dynamically pad the
sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
Use the end-of-sequence token as the padding token and set mlm=False. This will use the inputs as labels shifted to the right by one element:
from transformers import DataCollatorForLanguageModeling

tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
For TensorFlow, do the same but ask the collator to return TensorFlow tensors:
from transformers import DataCollatorForLanguageModeling

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial!
You're ready to start training your model now! Load DistilGPT2 with [AutoModelForCausalLM]:
from transformers import AutoModelForCausalLM, TrainingArguments, Trainer
model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
At this point, only three steps remain:
Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model).
Pass the training arguments to [Trainer] along with the model, datasets, and data collator.
Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="my_awesome_eli5_clm-model",
evaluation_strategy="epoch",
learning_rate=2e-5,
weight_decay=0.01,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=lm_dataset["train"],
eval_dataset=lm_dataset["test"],
data_collator=data_collator,
)
trainer.train()
Once training is completed, use the [~transformers.Trainer.evaluate] method to evaluate your model and get its perplexity:
import math
eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
Then share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer, AdamWeightDecay

optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
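The optimizer above uses a constant learning rate. If you'd also like warmup and decay, the create_optimizer helper imported above can build a schedule for you; here is a minimal sketch, where the step counts are placeholder assumptions (roughly 4,000 training examples at batch size 16 for 3 epochs) rather than values from this guide:
from transformers import create_optimizer

num_train_steps = (4000 // 16) * 3  # assumed dataset size, batch size, and number of epochs
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=0,
    num_train_steps=num_train_steps,
    weight_decay_rate=0.01,
)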
Then you can load DistilGPT2 with [TFAutoModelForCausalLM]:
from transformers import TFAutoModelForCausalLM
model = TFAutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
tf_train_set = model.prepare_tf_dataset(
lm_dataset["train"],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
tf_test_set = model.prepare_tf_dataset(
lm_dataset["test"],
shuffle=False,
batch_size=16,
collate_fn=data_collator,
)
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
import tensorflow as tf
model.compile(optimizer=optimizer) # No loss argument!
You can upload your model and tokenizer to the Hub during training. This can be done by specifying where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback

callback = PushToHubCallback(
    output_dir="my_awesome_eli5_clm-model",
    tokenizer=tokenizer,
)
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callback to finetune the model:
model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding
PyTorch notebook
or TensorFlow notebook.
Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with a prompt you'd like to generate text from:
prompt = "Somatic hypermutation allows the immune system to"
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for text generation with your model, and pass your text to it:
from transformers import pipeline
generator = pipeline("text-generation", model="username/my_awesome_eli5_clm-model")
generator(prompt)
[{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}]
Tokenize the text and return the input_ids as PyTorch tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_clm-model")
inputs = tokenizer(prompt, return_tensors="pt").input_ids
Use the [~transformers.generation_utils.GenerationMixin.generate] method to generate text.
For more details about the different text generation strategies and parameters for controlling generation, check out the Text generation strategies page.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("username/my_awesome_eli5_clm-model")
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
Decode the generated token ids back into text:
tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"]
Tokenize the text and return the input_ids as TensorFlow tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("username/my_awesome_eli5_clm-model")
inputs = tokenizer(prompt, return_tensors="tf").input_ids
Use the [~transformers.generation_tf_utils.TFGenerationMixin.generate] method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the Text generation strategies page.
from transformers import TFAutoModelForCausalLM
model = TFAutoModelForCausalLM.from_pretrained("username/my_awesome_eli5_clm-model")
outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
Decode the generated token ids back into text:
tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for']
LLM prompting guide
[[open-in-colab]]
Large Language Models such as Falcon, LLaMA, etc. are pretrained transformer models initially trained to predict the
next token given some input text. They typically have billions of parameters and have been trained on trillions of
tokens for an extended period of time. As a result, these models become quite powerful and versatile, and you can use
them to solve multiple NLP tasks out of the box by instructing the models with natural language prompts.
Designing such prompts to ensure the optimal output is often called "prompt engineering". Prompt engineering is an
iterative process that requires a fair amount of experimentation. Natural languages are much more flexible and expressive
than programming languages; however, they can also introduce some ambiguity. At the same time, prompts in natural language
are quite sensitive to changes. Even minor modifications in prompts can lead to wildly different outputs.
While there is no exact recipe for creating prompts to match all cases, researchers have worked out a number of best
practices that help to achieve optimal results more consistently.
This guide covers the prompt engineering best practices to help you craft better LLM prompts and solve various NLP tasks.
You'll learn:
Basics of prompting
Best practices of LLM prompting
Advanced prompting techniques: few-shot prompting and chain-of-thought
When to fine-tune instead of prompting
Prompt engineering is only a part of the LLM output optimization process. Another essential component is choosing the
optimal text generation strategy. You can customize how your LLM selects each of the subsequent tokens when generating
the text without modifying any of the trainable parameters. By tweaking the text generation parameters, you can reduce
repetition in the generated text and make it more coherent and human-sounding.
Text generation strategies and parameters are out of scope for this guide, but you can learn more about these topics in
the following guides:
Generation with LLMs
Text generation strategies
Basics of prompting
Types of models
The majority of modern LLMs are decoder-only transformers. Some examples include: LLaMA,
Llama2, Falcon, GPT2. However, you may encounter
encoder-decoder transformer LLMs as well, for instance, Flan-T5 and BART.
Encoder-decoder-style models are typically used in generative tasks where the output heavily relies on the input, for
example, in translation and summarization. The decoder-only models are used for all other types of generative tasks.
When using a pipeline to generate text with an LLM, it's important to know what type of LLM you are using, because
they use different pipelines.
Run inference with decoder-only models with the text-generation pipeline:
from transformers import pipeline
import torch
torch.manual_seed(0) # doctest: +IGNORE_RESULT
generator = pipeline('text-generation', model = 'openai-community/gpt2')
prompt = "Hello, I'm a language model"
generator(prompt, max_length = 30)
[{'generated_text': "Hello, I'm a language model expert, so I'm a big believer in the concept that I know very well and then I try to look into"}]
To run inference with an encoder-decoder, use the text2text-generation pipeline:
text2text_generator = pipeline("text2text-generation", model = 'google/flan-t5-base')
prompt = "Translate from English to French: I'm very happy to see you"
text2text_generator(prompt)
[{'generated_text': 'Je suis très heureuse de vous rencontrer.'}]
Base vs instruct/chat models
Most of the recent LLM checkpoints available on 🤗 Hub come in two versions: base and instruct (or chat). For example,
tiiuae/falcon-7b and tiiuae/falcon-7b-instruct.
Base models are excellent at completing the text when given an initial prompt, however, they are not ideal for NLP tasks
where they need to follow instructions, or for conversational use. This is where the instruct (chat) versions come in.
These checkpoints are the result of further fine-tuning of the pre-trained base versions on instructions and conversational data.
This additional fine-tuning makes them a better choice for many NLP tasks.
Let's illustrate some simple prompts that you can use with tiiuae/falcon-7b-instruct
to solve some common NLP tasks.
NLP tasks
First, let's set up the environment:
pip install -q transformers accelerate
Next, let's load the model with the appropriate pipeline ("text-generation"):
from transformers import pipeline, AutoTokenizer
import torch
torch.manual_seed(0) # doctest: +IGNORE_RESULT
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
Note that Falcon models were trained using the bfloat16 datatype, so we recommend you use the same. This requires a recent
version of CUDA and works best on modern cards.
Now that we have the model loaded via the pipeline, let's explore how you can use prompts to solve NLP tasks.
Text classification
One of the most common forms of text classification is sentiment analysis, which assigns a label like "positive", "negative",
or "neutral" to a sequence of text. Let's write a prompt that instructs the model to classify a given text (a movie review).
We'll start by giving the instruction, and then specifying the text to classify. Note that instead of leaving it at that, we're
also adding the beginning of the response - "Sentiment: ":
torch.manual_seed(0) # doctest: +IGNORE_RESULT
prompt = """Classify the text into neutral, negative or positive.
Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen.
Sentiment:
"""
sequences = pipe(
prompt,
max_new_tokens=10,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Result: Classify the text into neutral, negative or positive.
Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen.
Sentiment:
Positive
As a result, the output contains a classification label from the list we have provided in the instructions, and it is a correct one!
You may notice that in addition to the prompt, we pass a max_new_tokens parameter. It controls the number of tokens the
model will generate, and it is one of the many text generation parameters that you can learn about
in the Text generation strategies guide.
Named Entity Recognition
Named Entity Recognition (NER) is a task of finding named entities in a piece of text, such as a person, location, or organization.
Let's modify the instructions in the prompt to make the LLM perform this task. Here, let's also set return_full_text = False
so that the output doesn't contain the prompt:
torch.manual_seed(1) # doctest: +IGNORE_RESULT
prompt = """Return a list of named entities in the text.
Text: The Golden State Warriors are an American professional basketball team based in San Francisco.
Named entities:
"""
sequences = pipe(
prompt,
max_new_tokens=15,
return_full_text = False,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
- Golden State Warriors
- San Francisco
As you can see, the model correctly identified two named entities from the given text.
Translation
Another task LLMs can perform is translation. You can choose to use encoder-decoder models for this task, however, here,
for the simplicity of the examples, we'll keep using Falcon-7b-instruct, which does a decent job. Once again, here's how
you can write a basic prompt to instruct a model to translate a piece of text from English to Italian:
torch.manual_seed(2) # doctest: +IGNORE_RESULT
prompt = """Translate the English text to Italian.
Text: Sometimes, I've believed as many as six impossible things before breakfast.
Translation:
"""
sequences = pipe(
prompt,
max_new_tokens=20,
do_sample=True,
top_k=10,
return_full_text = False,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
A volte, ho creduto a sei impossibili cose prima di colazione.
Here we've added do_sample=True and top_k=10 to allow the model to be a bit more flexible when generating output.
Text summarization
Similar to the translation, text summarization is another generative task where the output heavily relies on the input,
and encoder-decoder models can be a better choice. However, decoder-style models can be used for this task as well.
Previously, we have placed the instructions at the very beginning of the prompt. However, the very end of the prompt can
also be a suitable location for instructions. Typically, it's better to place the instruction on one of the extreme ends.
torch.manual_seed(3) # doctest: +IGNORE_RESULT
prompt = """Permaculture is a design process mimicking the diversity, functionality and resilience of natural ecosystems. The principles and practices are drawn from traditional ecological knowledge of indigenous cultures combined with modern scientific understanding and technological innovations. Permaculture design provides a framework helping individuals and communities develop innovative, creative and effective strategies for meeting basic needs while preparing for and mitigating the projected impacts of climate change.
Write a summary of the above text.
Summary:
"""
sequences = pipe(
prompt,
max_new_tokens=30,
do_sample=True,
top_k=10,
return_full_text = False,
)
for seq in sequences:
    print(f"{seq['generated_text']}")
Permaculture is an ecological design mimicking natural ecosystems to meet basic needs and prepare for climate change. It is based on traditional knowledge and scientific understanding.
Question answering
For a question answering task, we can structure the prompt into the following logical components: instructions, context, question, and
the leading word or phrase ("Answer:") to nudge the model to start generating the answer:
torch.manual_seed(4) # doctest: +IGNORE_RESULT
prompt = """Answer the question using the context below.
Context: Gazpacho is a cold soup and drink made of raw, blended vegetables. Most gazpacho includes stale bread, tomato, cucumbers, onion, bell peppers, garlic, olive oil, wine vinegar, water, and salt. Northern recipes often include cumin and/or pimentón (smoked sweet paprika). Traditionally, gazpacho was made by pounding the vegetables in a mortar with a pestle; this more laborious method is still sometimes used as it helps keep the gazpacho cool and avoids the foam and silky consistency of smoothie versions made in blenders or food processors.
Question: What modern tool is used to make gazpacho?
Answer:
"""
sequences = pipe(
prompt,
max_new_tokens=10,
do_sample=True,
top_k=10,
return_full_text = False,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Result: Modern tools are used, such as immersion blenders
Reasoning
Reasoning is one of the most difficult tasks for LLMs, and achieving good results often requires applying advanced prompting techniques, like
Chain-of-thought.
Let's see if we can make a model reason about a simple arithmetic task with a basic prompt:
torch.manual_seed(5) # doctest: +IGNORE_RESULT
prompt = """There are 5 groups of students in the class. Each group has 4 students. How many students are there in the class?"""
sequences = pipe(
prompt,
max_new_tokens=30,
do_sample=True,
top_k=10,
return_full_text = False,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Result:
There are a total of 5 groups, so there are 5 x 4=20 students in the class.
Correct! Let's increase the complexity a little and see if we can still get away with a basic prompt:
torch.manual_seed(6) # doctest: +IGNORE_RESULT
prompt = """I baked 15 muffins. I ate 2 muffins and gave 5 muffins to a neighbor. My partner then bought 6 more muffins and ate 2. How many muffins do we now have?"""
sequences = pipe(
prompt,
max_new_tokens=10,
do_sample=True,
top_k=10,
return_full_text = False,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Result:
The total number of muffins now is 21
This is a wrong answer; it should be 12. In this case, this can be due to the prompt being too basic, or due to the choice
of model; after all, we've picked the smallest version of Falcon. Reasoning is difficult for models of all sizes, but larger
models are likely to perform better.
Best practices of LLM prompting
In this section of the guide we have compiled a list of best practices that tend to improve the prompt results:
When choosing the model to work with, the latest and most capable models are likely to perform better.
Start with a simple and short prompt, and iterate from there.
Put the instructions at the beginning of the prompt, or at the very end. When working with large context, models apply various optimizations to prevent Attention complexity from scaling quadratically. This may make a model more attentive to the beginning or end of a prompt than the middle.
Clearly separate instructions from the text they apply to - more on this in the next section.
Be specific and descriptive about the task and the desired outcome - its format, length, style, language, etc.
Avoid ambiguous descriptions and instructions.
Favor instructions that say "what to do" instead of those that say "what not to do".
"Lead" the output in the right direction by writing the first word (or even begin the first sentence for the model).
Use advanced techniques like few-shot prompting and Chain-of-thought.
Test your prompts with different models to assess their robustness.
Version and track the performance of your prompts.
Advanced prompting techniques
Few-shot prompting
The basic prompts in the sections above are examples of "zero-shot" prompts, meaning the model has been given
instructions and context, but no examples with solutions. LLMs that have been fine-tuned on instruction datasets generally
perform well on such "zero-shot" tasks. However, you may find that your task has more complexity or nuance, and, perhaps,
you have some requirements for the output that the model doesn't catch on just from the instructions. In this case, you can
try the technique called few-shot prompting.
In few-shot prompting, we provide examples in the prompt giving the model more context to improve the performance.
The examples condition the model to generate the output following the patterns in the examples.
Here's an example:
torch.manual_seed(0) # doctest: +IGNORE_RESULT
prompt = """Text: The first human went into space and orbited the Earth on April 12, 1961.
Date: 04/12/1961
Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon.
Date:"""
sequences = pipe(
prompt,
max_new_tokens=8,
do_sample=True,
top_k=10,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Result: Text: The first human went into space and orbited the Earth on April 12, 1961.
Date: 04/12/1961
Text: The first-ever televised presidential debate in the United States took place on September 28, 1960, between presidential candidates John F. Kennedy and Richard Nixon.
Date: 09/28/1960
In the above code snippet we used a single example to demonstrate the desired output to the model, so this can be called
"one-shot" prompting. However, depending on the task complexity, you may need to use more than one example.
Limitations of the few-shot prompting technique:
- While LLMs can pick up on the patterns in the examples, this technique doesn't work well on complex reasoning tasks.
- Few-shot prompting requires creating lengthy prompts. Prompts with a large number of tokens can increase computation and latency. There's also a limit to the length of prompts.
- Sometimes when given a number of examples, models can learn patterns that you didn't intend them to learn, e.g. that the third movie review is always negative.
Chain-of-thought
Chain-of-thought (CoT) prompting is a technique that nudges a model to produce intermediate reasoning steps thus improving
the results on complex reasoning tasks.
There are two ways of steering a model to produce the reasoning steps:
- few-shot prompting by illustrating examples with detailed answers to questions, showing the model how to work through a problem.
- by instructing the model to reason by adding phrases like "Let's think step by step" or "Take a deep breath and work through the problem step by step."
If we apply the CoT technique to the muffins example from the reasoning section and use a larger model,
such as tiiuae/falcon-180B-chat, which you can play with in HuggingChat,
we'll get a significant improvement on the reasoning result:
Let's go through this step-by-step:
1. You start with 15 muffins.
2. You eat 2 muffins, leaving you with 13 muffins.
3. You give 5 muffins to your neighbor, leaving you with 8 muffins.
4. Your partner buys 6 more muffins, bringing the total number of muffins to 14.
5. Your partner eats 2 muffins, leaving you with 12 muffins.
Therefore, you now have 12 muffins.
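You can also try the second approach with the smaller tiiuae/falcon-7b-instruct pipeline loaded earlier by simply appending a step-by-step instruction to the prompt. Treat this as a sketch: the seed, wording, and token budget are arbitrary choices, and a 7B model may still slip up on the arithmetic:
torch.manual_seed(7)  # arbitrary seed
prompt = """I baked 15 muffins. I ate 2 muffins and gave 5 muffins to a neighbor. My partner then bought 6 more muffins and ate 2. How many muffins do we now have?
Let's think step by step."""

sequences = pipe(
    prompt,
    max_new_tokens=80,
    do_sample=True,
    top_k=10,
    return_full_text=False,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")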
Prompting vs fine-tuning
You can achieve great results by optimizing your prompts, however, you may still ponder whether fine-tuning a model
would work better for your case. Here are some scenarios when fine-tuning a smaller model may be a preferred option:
Your domain is wildly different from what LLMs were pre-trained on and extensive prompt optimization did not yield sufficient results.
You need your model to work well in a low-resource language.
You need the model to be trained on sensitive data that is under strict regulations.
You have to use a small model due to cost, privacy, infrastructure or other limitations.
In all of the above examples, you will need to make sure that you either already have or can easily obtain a large enough
domain-specific dataset at a reasonable cost to fine-tune a model. You will also need to have enough time and resources
to fine-tune a model.
If the above examples are not the case for you, optimizing prompts can prove to be more beneficial.
Image Feature Extraction
[[open-in-colab]]
Image feature extraction is the task of extracting semantically meaningful features given an image. This has many use cases, including image similarity and image retrieval. Moreover, most computer vision models can be used for image feature extraction, where one can remove the task-specific head (image classification, object detection etc) and get the features. These features are very useful on a higher level: edge detection, corner detection and so on. They may also contain information about the real world (e.g. what a cat looks like) depending on how deep the model is. Therefore, these outputs can be used to train new classifiers on a specific dataset.
In this guide, you will:
Learn to build a simple image similarity system on top of the image-feature-extraction pipeline.
Accomplish the same task with bare model inference.
Image Similarity using image-feature-extraction Pipeline
We have two images of cats sitting on top of fish nets; one of them is generated.
from PIL import Image
import requests
img_urls = ["https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png", "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.jpeg"]
image_real = Image.open(requests.get(img_urls[0], stream=True).raw).convert("RGB")
image_gen = Image.open(requests.get(img_urls[1], stream=True).raw).convert("RGB")
Let's see the pipeline in action. First, initialize the pipeline. If you don't pass any model to it, the pipeline will be automatically initialized with google/vit-base-patch16-224. If you'd like to calculate similarity, set pool to True.
import torch
from transformers import pipeline

DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
pipe = pipeline(task="image-feature-extraction", model="google/vit-base-patch16-384", device=DEVICE, pool=True)
To infer with pipe, pass both images to it.
outputs = pipe([image_real, image_gen])
The output contains pooled embeddings of those two images.
# get the length of a single output
print(len(outputs[0][0]))
# show outputs
print(outputs)
768
[[[-0.03909236937761307, 0.43381670117378235, -0.06913255900144577,
To get the similarity score, we need to pass them to a similarity function.
from torch.nn.functional import cosine_similarity

similarity_score = cosine_similarity(torch.Tensor(outputs[0]),
                                     torch.Tensor(outputs[1]), dim=1)
print(similarity_score)
tensor([0.6043])
If you want to get the last hidden states before pooling, avoid passing any value for the pool parameter, as it is set to False by default. These hidden states are useful for training new classifiers or models based on the features from the model.
pipe = pipeline(task="image-feature-extraction", model="google/vit-base-patch16-224", device=DEVICE)
output = pipe(image_real)
Since the outputs are unpooled, we get the last hidden states where the first dimension is the batch size, and the last two are the embedding shape.
import numpy as np

print(np.array(output).shape)
(1, 197, 768)
Getting Features and Similarities using AutoModel
We can also use the AutoModel class of transformers to get the features. AutoModel loads any transformers model with no task-specific head, and we can use this to get the features.
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModel.from_pretrained("google/vit-base-patch16-224").to(DEVICE)
Let's write a simple function for inference. We will pass the inputs to the processor first and pass its outputs to the model.
def infer(image):
    inputs = processor(image, return_tensors="pt").to(DEVICE)
    outputs = model(**inputs)
    return outputs.pooler_output
We can pass the images directly to this function and get the embeddings.
embed_real = infer(image_real)
embed_gen = infer(image_gen)
We can get the similarity again over the embeddings.
from torch.nn.functional import cosine_similarity
similarity_score = cosine_similarity(embed_real, embed_gen, dim=1)
print(similarity_score)
tensor([0.6061], device='cuda:0', grad_fn=)
TimeSformer
Overview
The TimeSformer model was proposed in TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Facebook Research.
This work is a milestone in the action-recognition field, being the first video transformer. It inspired many transformer-based video understanding and classification papers.
The abstract from the paper is the following:
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: this https URL.
This model was contributed by fcakyon.
The original code can be found here.
Usage tips
There are many pretrained variants. Select your pretrained model based on the dataset it is trained on. Moreover,
the number of input frames per clip changes based on the model size, so you should consider this parameter while selecting your pretrained model.
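As an example, here is a minimal classification sketch with one of the Kinetics-400 checkpoints (facebook/timesformer-base-finetuned-k400, which expects 8 frames per clip); a random clip stands in for real video frames:
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

# a video is passed as a list of frames; 8 random frames stand in for a real clip
video = list(np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8))

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])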
Resources
Video classification task guide
TimesformerConfig
[[autodoc]] TimesformerConfig
TimesformerModel
[[autodoc]] TimesformerModel
- forward
TimesformerForVideoClassification
[[autodoc]] TimesformerForVideoClassification
- forward
DeBERTa-v2
Overview
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's
BERT model released in 2018 and Facebook's RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in
RoBERTa.
The abstract from the paper is the following:
Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.
The following information is visible directly on the original implementation
repository. DeBERTa v2 is the second version of the DeBERTa model. It includes
the 1.5B model used for the SuperGLUE single-model submission, which achieved 89.9 versus the human baseline of 89.8. You can
find more details about this submission in the authors'
blog.
New in v2:
Vocabulary In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data.
Instead of a GPT2-based tokenizer, the tokenizer is now a
sentencepiece-based tokenizer (see the usage sketch after this list).
nGiE (nGram Induced Input Encoding) The DeBERTa-v2 model uses an additional convolution layer alongside the first
transformer layer to better learn the local dependency of input tokens.
Sharing position projection matrix with content projection matrix in attention layer Based on previous
experiments, this can save parameters without affecting the performance.
Apply bucket to encode relative positions The DeBERTa-v2 model uses log bucket to encode relative positions
similar to T5.
900M model & 1.5B model Two additional model sizes are available: 900M and 1.5B, which significantly improve the
performance of downstream tasks.
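Here is the usage sketch referenced above: it loads the sentencepiece-based tokenizer together with the base model and extracts contextual embeddings. The microsoft/deberta-v2-xlarge checkpoint is used purely as an example; any DeBERTa-v2 checkpoint works the same way:
import torch
from transformers import AutoTokenizer, DebertaV2Model

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")

inputs = tokenizer("DeBERTa-v2 uses a sentencepiece-based tokenizer.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# one contextual embedding per sentencepiece token: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)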
This model was contributed by DeBERTa. The TF 2.0 implementation of this model was
contributed by kamalkraj. The original code can be found here.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
DebertaV2Config
[[autodoc]] DebertaV2Config
DebertaV2Tokenizer
[[autodoc]] DebertaV2Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
DebertaV2TokenizerFast
[[autodoc]] DebertaV2TokenizerFast
- build_inputs_with_special_tokens
- create_token_type_ids_from_sequences
DebertaV2Model
[[autodoc]] DebertaV2Model
- forward
DebertaV2PreTrainedModel
[[autodoc]] DebertaV2PreTrainedModel
- forward
DebertaV2ForMaskedLM
[[autodoc]] DebertaV2ForMaskedLM
- forward
DebertaV2ForSequenceClassification
[[autodoc]] DebertaV2ForSequenceClassification
- forward
DebertaV2ForTokenClassification
[[autodoc]] DebertaV2ForTokenClassification
- forward
DebertaV2ForQuestionAnswering
[[autodoc]] DebertaV2ForQuestionAnswering
- forward
DebertaV2ForMultipleChoice
[[autodoc]] DebertaV2ForMultipleChoice
- forward
TFDebertaV2Model
[[autodoc]] TFDebertaV2Model
- call
TFDebertaV2PreTrainedModel
[[autodoc]] TFDebertaV2PreTrainedModel
- call
TFDebertaV2ForMaskedLM
[[autodoc]] TFDebertaV2ForMaskedLM
- call
TFDebertaV2ForSequenceClassification
[[autodoc]] TFDebertaV2ForSequenceClassification
- call
TFDebertaV2ForTokenClassification
[[autodoc]] TFDebertaV2ForTokenClassification
- call
TFDebertaV2ForQuestionAnswering
[[autodoc]] TFDebertaV2ForQuestionAnswering
- call
TFDebertaV2ForMultipleChoice
[[autodoc]] TFDebertaV2ForMultipleChoice
- call
Chinese-CLIP
Overview
The Chinese-CLIP model was proposed in Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing cross-modal retrieval and also playing as a vision backbone for vision tasks like zero-shot image classification, open-domain object detection, etc. The original Chinese-CLIP code is released at this link.
The abstract from the paper is the following:
The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). Our codes, pretrained models, and demos have been released.
The Chinese-CLIP model was contributed by OFA-Sys.
Usage example
The code snippet below shows how to compute image & text features and similarities:
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
# compute image feature
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]
Currently, the following scales of pretrained Chinese-CLIP models are available on the 🤗 Hub:
OFA-Sys/chinese-clip-vit-base-patch16
OFA-Sys/chinese-clip-vit-large-patch14
OFA-Sys/chinese-clip-vit-large-patch14-336px
OFA-Sys/chinese-clip-vit-huge-patch14
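Chinese-CLIP can also be used through the zero-shot-image-classification pipeline, which CLIP-style models in Transformers generally support. Here is a minimal sketch reusing the Pokémon image and labels from the example above; the Chinese hypothesis_template is an assumption chosen to keep the prompt in Chinese:
from PIL import Image
import requests
from transformers import pipeline

classifier = pipeline(
    task="zero-shot-image-classification",
    model="OFA-Sys/chinese-clip-vit-base-patch16",
)

url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# Squirtle, Bulbasaur, Charmander, Pikachu
predictions = classifier(
    image,
    candidate_labels=["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"],
    hypothesis_template="一张{}的图片。",
)
print(predictions)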