By default, all the other modules such as torch.nn.LayerNorm are converted to torch.float16. You can change the data type of these modules with the torch_dtype parameter if you want:

```py
import torch
from transformers import AutoModelForCausalLM

model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True, torch_dtype=torch.float32)
model_4bit.model.decoder.layers[-1].final_layer_norm.weight.dtype
```
If you have bitsandbytes>=0.41.3, you can serialize 4-bit models and push them to the Hugging Face Hub. Simply call model.push_to_hub() after loading the model in 4-bit precision. You can also save the serialized 4-bit model locally with the model.save_pretrained() method.
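A minimal sketch of what this looks like (assuming you are already logged in to the Hub; the repository name is hypothetical):

```py
from transformers import AutoModelForCausalLM

# Load the model in 4-bit precision (serialization requires bitsandbytes>=0.41.3)
model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True)

# Save the serialized 4-bit weights locally
model_4bit.save_pretrained("opt-350m-4bit")

# Or push them to the Hugging Face Hub (hypothetical repository name)
model_4bit.push_to_hub("{your_username}/opt-350m-4bit")
```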
Training with 8-bit and 4-bit weights is only supported for training extra parameters. You can check your memory footprint with the get_memory_footprint method:

```py
print(model.get_memory_footprint())
```

Quantized models can be loaded from the [~PreTrainedModel.from_pretrained] method without needing to specify the load_in_8bit or load_in_4bit parameters:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto")
```

8-bit

Learn more about the details of 8-bit quantization in this blog post!
This section explores some of the specific features of 8-bit models, such as offloading, outlier thresholds, skipping module conversion, and finetuning.

Offloading

8-bit models can offload weights between the CPU and GPU to support fitting very large models into memory. The weights dispatched to the CPU are actually stored in float32, and aren't converted to 8-bit. For example, to enable offloading for the bigscience/bloom-1b7 model, start by creating a [BitsAndBytesConfig]:
```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
```
Design a custom device map to fit everything on your GPU except for the lm_head, which you'll dispatch to the CPU:

```py
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}
```

Now load your model with the custom device_map and quantization_config:

```py
model_8bit = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",
    device_map=device_map,
    quantization_config=quantization_config,
)
```

Outlier threshold

An "outlier" is a hidden state value greater than a certain threshold, and these values are computed in fp16. While the values are usually normally distributed ([-3.5, 3.5]), this distribution can be very different for large models ([-60, 6] or [6, 60]). 8-bit quantization works well for values ~5, but beyond that, there is a significant performance penalty. A good default threshold value is 6, but a lower threshold may be needed for more unstable models (small models or finetuning). To find the best threshold for your model, we recommend experimenting with the llm_int8_threshold parameter in [BitsAndBytesConfig]:
```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
    llm_int8_threshold=10,
)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=quantization_config,
)
```
Skip module conversion

For some models, like Jukebox, you don't need to quantize every module to 8-bit; in fact, doing so can cause instability. With Jukebox, there are several lm_head modules that should be skipped using the llm_int8_skip_modules parameter in [BitsAndBytesConfig]:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"
quantization_config = BitsAndBytesConfig(
    llm_int8_skip_modules=["lm_head"],
)
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
```
Finetuning

With the PEFT library, you can finetune large models like flan-t5-large and facebook/opt-6.7b with 8-bit quantization. You don't need to pass the device_map parameter for training because it'll automatically load your model on a GPU. However, you can still customize the device map with the device_map parameter if you want to (device_map="auto" should only be used for inference).
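For illustration, here is a hedged sketch of attaching LoRA adapters to an 8-bit model with PEFT (the adapter hyperparameters are assumptions and may need tuning for your model):

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit precision
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# Prepare the quantized model for training and attach LoRA adapters
# (the extra trainable parameters mentioned above)
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```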
4-bit

Try 4-bit quantization in this notebook and learn more about its details in this blog post. This section explores some of the specific features of 4-bit models, such as changing the compute data type, using the Normal Float 4 (NF4) data type, and using nested quantization.

Compute data type

To speed up computation, you can change the data type from float32 (the default value) to bf16 using the bnb_4bit_compute_dtype parameter in [BitsAndBytesConfig]:
```py
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```

Normal Float 4 (NF4)

NF4 is a 4-bit data type from the QLoRA paper, adapted for weights initialized from a normal distribution. You should use NF4 for training 4-bit base models. This can be configured with the bnb_4bit_quant_type parameter in the [BitsAndBytesConfig]:
```py
from transformers import BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)
model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
```
For inference, the bnb_4bit_quant_type does not have a huge impact on performance. However, to remain consistent with the model weights, you should use the same bnb_4bit_compute_dtype and torch_dtype values.

Nested quantization

Nested quantization is a technique that can save additional memory at no additional performance cost. This feature performs a second quantization of the already quantized weights to save an additional 0.4 bits/parameter. For example, with nested quantization, you can finetune a Llama-13b model on a 16GB NVIDIA T4 GPU with a sequence length of 1024, a batch size of 1, and 4 gradient accumulation steps.
```py
from transformers import BitsAndBytesConfig

double_quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)
model_double_quant = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b", quantization_config=double_quant_config)
```
Optimum

The Optimum library supports quantization for Intel, Furiosa, ONNX Runtime, GPTQ, and lower-level PyTorch quantization functions. Consider using Optimum for quantization if you're using specific and optimized hardware like Intel CPUs, Furiosa NPUs, or a model accelerator like ONNX Runtime.

Benchmarks

To compare the speed, throughput, and latency of each quantization scheme, check the following benchmarks obtained from the optimum-benchmark library. The benchmark was run on an NVIDIA A1000 for the TheBloke/Mistral-7B-v0.1-AWQ and TheBloke/Mistral-7B-v0.1-GPTQ models. These were also tested against the bitsandbytes quantization methods as well as a native fp16 model.
(Benchmark charts: forward peak memory per batch size, generate peak memory per batch size, generate throughput per batch size, and forward latency per batch size.)
The benchmarks indicate AWQ quantization is the fastest for inference, text generation, and has the lowest peak memory for text generation. However, AWQ has the largest forward latency per batch size. For a more detailed discussion about the pros and cons of each quantization method, read the Overview of natively supported quantization schemes in 🤗 Transformers blog post.

Fused AWQ modules

The TheBloke/Mistral-7B-OpenOrca-AWQ model was benchmarked with batch_size=1 with and without fused modules.

Unfused module

| Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM)   |
|-----------:|---------------:|--------------:|-----------------:|----------------:|:----------------|
|          1 |             32 |            32 |          60.0984 |         38.4537 | 4.50 GB (5.68%) |
|          1 |             64 |            64 |          1333.67 |         31.6604 | 4.50 GB (5.68%) |
|          1 |            128 |           128 |          2434.06 |         31.6272 | 4.50 GB (5.68%) |
|          1 |            256 |           256 |          3072.26 |         38.1731 | 4.50 GB (5.68%) |
|          1 |            512 |           512 |          3184.74 |         31.6819 | 4.59 GB (5.80%) |
|          1 |           1024 |          1024 |          3148.18 |         36.8031 | 4.81 GB (6.07%) |
|          1 |           2048 |          2048 |          2927.33 |         35.2676 | 5.73 GB (7.23%) |

Fused module

| Batch Size | Prefill Length | Decode Length | Prefill tokens/s | Decode tokens/s | Memory (VRAM)   |
|-----------:|---------------:|--------------:|-----------------:|----------------:|:----------------|
|          1 |             32 |            32 |          81.4899 |         80.2569 | 4.00 GB (5.05%) |
|          1 |             64 |            64 |           1756.1 |          106.26 | 4.00 GB (5.05%) |
|          1 |            128 |           128 |          2479.32 |         105.631 | 4.00 GB (5.06%) |
|          1 |            256 |           256 |           1813.6 |         85.7485 | 4.01 GB (5.06%) |
|          1 |            512 |           512 |           2848.9 |          97.701 | 4.11 GB (5.19%) |
|          1 |           1024 |          1024 |          3044.35 |         87.7323 | 4.41 GB (5.57%) |
|          1 |           2048 |          2048 |          2715.11 |         89.4709 | 5.57 GB (7.04%) |

The speed and throughput of fused and unfused modules were also tested with the optimum-benchmark library.
(Benchmark charts: forward peak memory per batch size and generate throughput per batch size for the fused and unfused modules.)
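As a reference, fused modules can be enabled at load time through the quantization config; the following is a minimal sketch (the fuse_max_seq_len value is an assumption and should match your expected sequence length):

```py
from transformers import AutoModelForCausalLM, AwqConfig

quantization_config = AwqConfig(
    bits=4,
    fuse_max_seq_len=512,  # assumed maximum sequence length for the fused modules
    do_fuse=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-OpenOrca-AWQ",
    quantization_config=quantization_config,
)
```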
Check copies

Since the Transformers library is very opinionated with respect to model code, and each model should be fully implemented in a single file without relying on other models, we have added a mechanism that checks whether a copy of the code of a layer of a given model stays consistent with the original. This way, when there is a bug fix, we can see all other impacted models and choose to trickle down the modification or break the copy.
If a file is a full copy of another file, you should register it in the constant FULL_COPIES of utils/check_copies.py. This mechanism relies on comments of the form # Copied from xxx. The xxx should contain the whole path to the class or function which is being copied below. For instance, RobertaSelfOutput is a direct copy of the BertSelfOutput class, so you can see here it has a comment:

Copied from transformers.models.bert.modeling_bert.BertSelfOutput
Note that instead of applying this to a whole class, you can apply it to the relevant methods that are copied from. For instance here you can see how RobertaPreTrainedModel._init_weights is copied from the same method in BertPreTrainedModel with the comment:

Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights
Sometimes the copy is exactly the same except for names: for instance in RobertaAttention, we use RobertaSelfAttention instead of BertSelfAttention but other than that, the code is exactly the same. This is why # Copied from supports simple string replacements with the following syntax: Copied from xxx with foo->bar. This means the code is copied with all instances of foo being replaced by bar. You can see how it is used here in RobertaAttention with the comment:
Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta

Note that there shouldn't be any spaces around the arrow (unless that space is part of the pattern to replace, of course). You can add several patterns separated by a comma. For instance here CamembertForMaskedLM is a direct copy of RobertaForMaskedLM with two replacements: Roberta to Camembert and ROBERTA to CAMEMBERT. You can see here this is done with the comment:
Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT

If the order matters (because one of the replacements might conflict with a previous one), the replacements are executed from left to right. If the replacements change the formatting (if you replace a short name by a very long name for instance), the copy is checked after applying the auto-formatter.
Another way, when the patterns are just different casings of the same replacement (with an uppercased and a lowercased variant), is to add the option all-casing. Here is an example in MobileBertForSequenceClassification with the comment:
Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing

In this case, the code is copied from BertForSequenceClassification by replacing:
- Bert by MobileBert (for instance when using MobileBertModel in the init)
- bert by mobilebert (for instance when defining self.mobilebert)
- BERT by MOBILEBERT (in the constant MOBILEBERT_INPUTS_DOCSTRING)
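For illustration, this is roughly how such a comment sits directly above the copied class in the modeling file (class body abridged):

```py
# Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing
class MobileBertForSequenceClassification(MobileBertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        # `bert` was replaced by `mobilebert` thanks to the all-casing option
        self.mobilebert = MobileBertModel(config)
        ...
```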
Export to TFLite

TensorFlow Lite is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special efficient portable format identified by the .tflite file extension.

🤗 Optimum offers functionality to export 🤗 Transformers models to TFLite through the exporters.tflite module. For the list of supported model architectures, please refer to the 🤗 Optimum documentation.

To export a model to TFLite, install the required dependencies:
```bash
pip install optimum[exporters-tf]
```

To check out all available arguments, refer to the 🤗 Optimum docs, or view the help in the command line:

```bash
optimum-cli export tflite --help
```

To export a model's checkpoint from the 🤗 Hub, for example, google-bert/bert-base-uncased, run the following command:

```bash
optimum-cli export tflite --model google-bert/bert-base-uncased --sequence_length 128 bert_tflite/
```

You should see the logs indicating progress and showing where the resulting model.tflite is saved, like this:
```bash
Validating TFLite model
    -[✓] TFLite model output names match reference model (logits)
    - Validating TFLite Model output "logits":
        -[✓] (1, 128, 30522) matches (1, 128, 30522)
        -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05)
The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05:
- logits: max diff = 5.817413330078125e-05.
The exported model was saved at: bert_tflite
```

The example above illustrates exporting a checkpoint from the 🤗 Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (local_path). When using the CLI, pass the local_path to the model argument instead of the checkpoint name on the 🤗 Hub.
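For example, a local export might look like this (a sketch; ./local_model is a hypothetical directory containing the saved weights and tokenizer):

```bash
optimum-cli export tflite --model ./local_model --sequence_length 128 bert_tflite/
```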
Optimize inference using torch.compile()

This guide aims to provide a benchmark on the inference speed-ups introduced with torch.compile() for computer vision models in 🤗 Transformers.

Benefits of torch.compile

Depending on the model and the GPU, torch.compile() yields up to 30% speed-up during inference. To use torch.compile(), simply install torch version 2.0 or later. Compiling a model takes time, so it's most useful if you compile the model only once instead of every time you run inference. To compile any computer vision model of your choice, call torch.compile() on the model as shown below:
```diff
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
+ model = torch.compile(model)
```
compile() comes with multiple modes for compiling, which essentially differ in compilation time and inference overhead. max-autotune takes longer than reduce-overhead but results in faster inference. The default mode is fastest for compilation but is not as efficient as reduce-overhead for inference time. In this guide, we used the default mode. You can learn more about it here.

We benchmarked torch.compile with different computer vision models, tasks, types of hardware, and batch sizes on torch version 2.0.1.

Benchmarking code

Below you can find the benchmarking code for each task. We warm up the GPU before inference and take the mean time of 300 inferences, using the same image each time.

Image Classification with ViT

```python
import torch
from PIL import Image
import requests
import numpy as np
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").to("cuda")
model = torch.compile(model)

processed_input = processor(image, return_tensors='pt').to(device="cuda")

with torch.no_grad():
    _ = model(**processed_input)
```
Object Detection with DETR

```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50").to("cuda")
model = torch.compile(model)

# DETR only takes images as input; no text prompts are needed for object detection
inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**inputs)
```
Image Segmentation with Segformer

```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512").to("cuda")
model = torch.compile(model)

seg_inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    _ = model(**seg_inputs)
```
Below you can find the list of the models we benchmarked.

Image Classification
- google/vit-base-patch16-224
- microsoft/beit-base-patch16-224-pt22k-ft22k
- facebook/convnext-large-224
- microsoft/resnet-50

Image Segmentation
- nvidia/segformer-b0-finetuned-ade-512-512
- facebook/mask2former-swin-tiny-coco-panoptic
- facebook/maskformer-swin-base-ade
- google/deeplabv3_mobilenet_v2_1.0_513

Object Detection
- google/owlvit-base-patch32
- facebook/detr-resnet-101
- microsoft/conditional-detr-resnet-50

Below you can find visualization of inference durations with and without torch.compile() and percentage improvements for each model in different hardware and batch sizes.
Below you can find inference durations in milliseconds for each model with and without compile(). Note that OwlViT results in OOM in larger batch sizes.

A100 (batch size: 1)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 9.325 | 7.584 |
| Image Segmentation/Segformer | 11.759 | 10.500 |
| Object Detection/OwlViT | 24.978 | 18.420 |
| Image Classification/BeiT | 11.282 | 8.448 |
| Object Detection/DETR | 34.619 | 19.040 |
| Image Classification/ConvNeXT | 10.410 | 10.208 |
| Image Classification/ResNet | 6.531 | 4.124 |
| Image Segmentation/Mask2former | 60.188 | 49.117 |
| Image Segmentation/Maskformer | 75.764 | 59.487 |
| Image Segmentation/MobileNet | 8.583 | 3.974 |
| Object Detection/Resnet-101 | 36.276 | 18.197 |
| Object Detection/Conditional-DETR | 31.219 | 17.993 |

A100 (batch size: 4)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 14.832 | 14.499 |
| Image Segmentation/Segformer | 18.838 | 16.476 |
| Image Classification/BeiT | 13.205 | 13.048 |
| Object Detection/DETR | 48.657 | 32.418 |
| Image Classification/ConvNeXT | 22.940 | 21.631 |
| Image Classification/ResNet | 6.657 | 4.268 |
| Image Segmentation/Mask2former | 74.277 | 61.781 |
| Image Segmentation/Maskformer | 180.700 | 159.116 |
| Image Segmentation/MobileNet | 14.174 | 8.515 |
| Object Detection/Resnet-101 | 68.101 | 44.998 |
| Object Detection/Conditional-DETR | 56.470 | 35.552 |

A100 (batch size: 16)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 40.944 | 40.010 |
| Image Segmentation/Segformer | 37.005 | 31.144 |
| Image Classification/BeiT | 41.854 | 41.048 |
| Object Detection/DETR | 164.382 | 161.902 |
| Image Classification/ConvNeXT | 82.258 | 75.561 |
| Image Classification/ResNet | 7.018 | 5.024 |
| Image Segmentation/Mask2former | 178.945 | 154.814 |
| Image Segmentation/Maskformer | 638.570 | 579.826 |
| Image Segmentation/MobileNet | 51.693 | 30.310 |
| Object Detection/Resnet-101 | 232.887 | 155.021 |
| Object Detection/Conditional-DETR | 180.491 | 124.032 |

V100 (batch size: 1)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 10.495 | 6.00 |
| Image Segmentation/Segformer | 13.321 | 5.862 |
| Object Detection/OwlViT | 25.769 | 22.395 |
| Image Classification/BeiT | 11.347 | 7.234 |
| Object Detection/DETR | 33.951 | 19.388 |
| Image Classification/ConvNeXT | 11.623 | 10.412 |
| Image Classification/ResNet | 6.484 | 3.820 |
| Image Segmentation/Mask2former | 64.640 | 49.873 |
| Image Segmentation/Maskformer | 95.532 | 72.207 |
| Image Segmentation/MobileNet | 9.217 | 4.753 |
| Object Detection/Resnet-101 | 52.818 | 28.367 |
| Object Detection/Conditional-DETR | 39.512 | 20.816 |

V100 (batch size: 4)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 15.181 | 14.501 |
| Image Segmentation/Segformer | 16.787 | 16.188 |
| Image Classification/BeiT | 15.171 | 14.753 |
| Object Detection/DETR | 88.529 | 64.195 |
| Image Classification/ConvNeXT | 29.574 | 27.085 |
| Image Classification/ResNet | 6.109 | 4.731 |
| Image Segmentation/Mask2former | 90.402 | 76.926 |
| Image Segmentation/Maskformer | 234.261 | 205.456 |
| Image Segmentation/MobileNet | 24.623 | 14.816 |
| Object Detection/Resnet-101 | 134.672 | 101.304 |
| Object Detection/Conditional-DETR | 97.464 | 69.739 |

V100 (batch size: 16)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 52.209 | 51.633 |
| Image Segmentation/Segformer | 61.013 | 55.499 |
| Image Classification/BeiT | 53.938 | 53.581 |
| Object Detection/DETR | OOM | OOM |
| Image Classification/ConvNeXT | 109.682 | 100.771 |
| Image Classification/ResNet | 14.857 | 12.089 |
| Image Segmentation/Mask2former | 249.605 | 222.801 |
| Image Segmentation/Maskformer | 831.142 | 743.645 |
| Image Segmentation/MobileNet | 93.129 | 55.365 |
| Object Detection/Resnet-101 | 482.425 | 361.843 |
| Object Detection/Conditional-DETR | 344.661 | 255.298 |

T4 (batch size: 1)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 16.520 | 15.786 |
| Image Segmentation/Segformer | 16.116 | 14.205 |
| Object Detection/OwlViT | 53.634 | 51.105 |
| Image Classification/BeiT | 16.464 | 15.710 |
| Object Detection/DETR | 73.100 | 53.99 |
| Image Classification/ConvNeXT | 32.932 | 30.845 |
| Image Classification/ResNet | 6.031 | 4.321 |
| Image Segmentation/Mask2former | 79.192 | 66.815 |
| Image Segmentation/Maskformer | 200.026 | 188.268 |
| Image Segmentation/MobileNet | 18.908 | 11.997 |
| Object Detection/Resnet-101 | 106.622 | 82.566 |
| Object Detection/Conditional-DETR | 77.594 | 56.984 |

T4 (batch size: 4)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 43.653 | 43.626 |
| Image Segmentation/Segformer | 45.327 | 42.445 |
| Image Classification/BeiT | 52.007 | 51.354 |
| Object Detection/DETR | 277.850 | 268.003 |
| Image Classification/ConvNeXT | 119.259 | 105.580 |
| Image Classification/ResNet | 13.039 | 11.388 |
| Image Segmentation/Mask2former | 201.540 | 184.670 |
| Image Segmentation/Maskformer | 764.052 | 711.280 |
| Image Segmentation/MobileNet | 74.289 | 48.677 |
| Object Detection/Resnet-101 | 421.859 | 357.614 |
| Object Detection/Conditional-DETR | 289.002 | 226.945 |

T4 (batch size: 16)

| Task/Model | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|
| Image Classification/ViT | 163.914 | 160.907 |
| Image Segmentation/Segformer | 192.412 | 163.620 |
| Image Classification/BeiT | 188.978 | 187.976 |
| Object Detection/DETR | OOM | OOM |
| Image Classification/ConvNeXT | 422.886 | 388.078 |
| Image Classification/ResNet | 44.114 | 37.604 |
| Image Segmentation/Mask2former | 756.337 | 695.291 |
| Image Segmentation/Maskformer | 2842.940 | 2656.88 |
| Image Segmentation/MobileNet | 299.003 | 201.942 |
| Object Detection/Resnet-101 | 1619.505 | 1262.758 |
| Object Detection/Conditional-DETR | 1137.513 | 897.390 |

PyTorch Nightly

We also benchmarked on PyTorch nightly (2.1.0dev, find the wheel here) and observed improvement in latency both for uncompiled and compiled models.
A100

| Task/Model | Batch Size | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 12.462 | 6.954 |
| Image Classification/BeiT | 4 | 14.109 | 12.851 |
| Image Classification/BeiT | 16 | 42.179 | 42.147 |
| Object Detection/DETR | Unbatched | 30.484 | 15.221 |
| Object Detection/DETR | 4 | 46.816 | 30.942 |
| Object Detection/DETR | 16 | 163.749 | 163.706 |

T4

| Task/Model | Batch Size | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 14.408 | 14.052 |
| Image Classification/BeiT | 4 | 47.381 | 46.604 |
| Image Classification/BeiT | 16 | 42.179 | 42.147 |
| Object Detection/DETR | Unbatched | 68.382 | 53.481 |
| Object Detection/DETR | 4 | 269.615 | 204.785 |
| Object Detection/DETR | 16 | OOM | OOM |

V100

| Task/Model | Batch Size | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|:---:|
| Image Classification/BeiT | Unbatched | 13.477 | 7.926 |
| Image Classification/BeiT | 4 | 15.103 | 14.378 |
| Image Classification/BeiT | 16 | 52.517 | 51.691 |
| Object Detection/DETR | Unbatched | 28.706 | 19.077 |
| Object Detection/DETR | 4 | 88.402 | 62.949 |
| Object Detection/DETR | 16 | OOM | OOM |

Reduce Overhead

We benchmarked reduce-overhead compilation mode for A100 and T4 in Nightly.

A100

| Task/Model | Batch Size | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|:---:|
| Image Classification/ConvNeXT | Unbatched | 11.758 | 7.335 |
| Image Classification/ConvNeXT | 4 | 23.171 | 21.490 |
| Image Classification/ResNet | Unbatched | 7.435 | 3.801 |
| Image Classification/ResNet | 4 | 7.261 | 2.187 |
| Object Detection/Conditional-DETR | Unbatched | 32.823 | 11.627 |
| Object Detection/Conditional-DETR | 4 | 50.622 | 33.831 |
| Image Segmentation/MobileNet | Unbatched | 9.869 | 4.244 |
| Image Segmentation/MobileNet | 4 | 14.385 | 7.946 |

T4

| Task/Model | Batch Size | torch 2.0 - no compile | torch 2.0 - compile |
|:---:|:---:|:---:|:---:|
| Image Classification/ConvNeXT | Unbatched | 32.137 | 31.84 |
| Image Classification/ConvNeXT | 4 | 120.944 | 110.209 |
| Image Classification/ResNet | Unbatched | 9.761 | 7.698 |
| Image Classification/ResNet | 4 | 15.215 | 13.871 |
| Object Detection/Conditional-DETR | Unbatched | 72.150 | 57.660 |
| Object Detection/Conditional-DETR | 4 | 301.494 | 247.543 |
| Image Segmentation/MobileNet | Unbatched | 22.266 | 19.339 |
| Image Segmentation/MobileNet | 4 | 78.311 | 50.983 |
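For reference, selecting a compilation mode is a single argument to torch.compile(); here is a minimal sketch with one of the benchmarked checkpoints:

```py
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50").to("cuda")
# reduce-overhead trades extra compilation time for lower per-call overhead
model = torch.compile(model, mode="reduce-overhead")
```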
PyTorch training on Apple silicon

Previously, training models on a Mac was limited to the CPU only. With the release of PyTorch v1.12, you can take advantage of Apple silicon GPUs for significantly faster training. This is powered in PyTorch by integrating Apple's Metal Performance Shaders (MPS) as a backend. The MPS backend implements PyTorch operations as custom Metal shaders and places these modules on a mps device.
Some PyTorch operations are not implemented in MPS yet and will throw an error. To avoid this, you should set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU kernels instead (you'll still see a UserWarning).
If you run into any other errors, please open an issue in the PyTorch repository because the [Trainer] only integrates the MPS backend. With the mps device set, you can:

- train larger networks or batch sizes locally
- reduce data retrieval latency because the GPU's unified memory architecture allows direct access to the full memory store
- reduce costs because you don't need to train on cloud-based GPUs or add additional local GPUs
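As a quick sanity check, here is a minimal sketch for confirming the mps device is available and moving a tensor to it (the fallback environment variable mentioned above is typically exported in your shell before launching Python):

```py
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    x = torch.ones(2, 2, device=device)
    print(x.device)  # mps:0
else:
    print("MPS backend is not available on this machine.")
```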
Get started by making sure you have PyTorch installed. MPS acceleration is supported on macOS 12.3+.

```bash
pip install torch torchvision torchaudio
```

[TrainingArguments] uses the mps device by default if it's available, which means you don't need to explicitly set the device. For example, you can run the run_glue.py script with the MPS backend automatically enabled without making any changes.
```diff
export TASK_NAME=mrpc

python examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path google-bert/bert-base-cased \
  --task_name $TASK_NAME \
- --use_mps_device \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/ \
  --overwrite_output_dir
```
Backends for distributed setups like gloo and nccl are not supported by the mps device which means you can only train on a single GPU with the MPS backend. You can learn more about the MPS backend in the Introducing Accelerated PyTorch Training on Mac blog post.
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate sacrebleu
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
```py
from huggingface_hub import notebook_login

notebook_login()
```

Load OPUS Books dataset

Start by loading the English-French subset of the OPUS Books dataset from the 🤗 Datasets library:

```py
from datasets import load_dataset

books = load_dataset("opus_books", "en-fr")
```
Split the dataset into a train and test set with the [~datasets.Dataset.train_test_split] method:

```py
books = books["train"].train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
books["train"][0]
{'id': '90560',
 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
  'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}}
```
translation: an English and French translation of the text.

Preprocess

The next step is to load a T5 tokenizer to process the English-French language pairs:

```py
from transformers import AutoTokenizer

checkpoint = "google-t5/t5-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```
The preprocessing function you want to create needs to:

- Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.
- Tokenize the input (English) and target (French) separately because you can't tokenize French text with a tokenizer pretrained on an English vocabulary.
- Truncate sequences to be no longer than the maximum length set by the max_length parameter.
```py
source_lang = "en"
target_lang = "fr"
prefix = "translate English to French: "

def preprocess_function(examples):
    inputs = [prefix + example[source_lang] for example in examples["translation"]]
    targets = [example[target_lang] for example in examples["translation"]]
    model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
    return model_inputs
```
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:

```py
tokenized_books = books.map(preprocess_function, batched=True)
```

Now create a batch of examples using [DataCollatorForSeq2Seq]. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
In PyTorch:

```py
from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```
In TensorFlow:

```py
from transformers import DataCollatorForSeq2Seq

data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```

Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the SacreBLEU metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
```py
import evaluate

metric = evaluate.load("sacrebleu")
```

Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the SacreBLEU score:
```py
import numpy as np

def postprocess_text(preds, labels):
    preds = [pred.strip() for pred in preds]
    labels = [[label.strip()] for label in labels]
    return preds, labels

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)

    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    result = {"bleu": result["score"]}

    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    result = {k: round(v, 4) for k, v in result.items()}
    return result
```
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.

Train

If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!

You're ready to start training your model now! Load T5 with [AutoModelForSeq2SeqLM]:

```py
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer

model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

At this point, only three steps remain:
1. Define your training hyperparameters in [Seq2SeqTrainingArguments]. The only required parameter is output_dir, which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the SacreBLEU metric and save the training checkpoint.
2. Pass the training arguments to [Seq2SeqTrainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
3. Call [~Trainer.train] to finetune your model.
```py
training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_opus_books_model",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=2,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_books["train"],
    eval_dataset=tokenized_books["test"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

trainer.train()
```
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:

```py
trainer.push_to_hub()
```

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
Then you can load T5 with [TFAutoModelForSeq2SeqLM]:

```py
from transformers import TFAutoModelForSeq2SeqLM

model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:

```py
tf_train_set = model.prepare_tf_dataset(
    tokenized_books["train"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)

tf_test_set = model.prepare_tf_dataset(
    tokenized_books["test"],
    shuffle=False,
    batch_size=16,
    collate_fn=data_collator,
)
```
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
```py
import tensorflow as tf

model.compile(optimizer=optimizer)  # No loss argument!
```

The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions, and to provide a way to push your model to the Hub. Both are done by using Keras callbacks. Pass your compute_metrics function to [~transformers.KerasMetricCallback]:
```py
from transformers.keras_callbacks import KerasMetricCallback

# use the test split prepared above as the evaluation set
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```

Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]:

```py
from transformers.keras_callbacks import PushToHubCallback

push_to_hub_callback = PushToHubCallback(
    output_dir="my_awesome_opus_books_model",
    tokenizer=tokenizer,
)
```
Then bundle your callbacks together:

```py
callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
```py
model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for translation, take a look at the corresponding PyTorch notebook or TensorFlow notebook.

Inference

Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like to translate to another language. For T5, you need to prefix your input depending on the task you're working on. For translation from English to French, you should prefix your input as shown below:
```py
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for translation with your model, and pass your text to it:

```py
from transformers import pipeline

translator = pipeline("translation", model="my_awesome_opus_books_model")
translator(text)
[{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}]
```
You can also manually replicate the results of the pipeline if you'd like:

Tokenize the text and return the input_ids as PyTorch tensors:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [~transformers.generation_utils.GenerationMixin.generate] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the Text Generation API.
```py
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```py
tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lignées partagent des ressources avec des bactéries enfixant l'azote.'
```

Tokenize the text and return the input_ids as TensorFlow tensors:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
inputs = tokenizer(text, return_tensors="tf").input_ids
```

Use the [~transformers.generation_tf_utils.TFGenerationMixin.generate] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the Text Generation API.
```py
from transformers import TFAutoModelForSeq2SeqLM

model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```py
tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.'
```
LayoutLMv2 solves the document question-answering task by adding a question-answering head on top of the final hidden states of the tokens, to predict the positions of the start and end tokens of the answer. In other words, the problem is treated as extractive question answering: given the context, extract which piece of information answers the question. The context comes from the output of an OCR engine; here it is Google's Tesseract. Before you begin, make sure you have all the necessary libraries installed. LayoutLMv2 depends on detectron2, torchvision and tesseract.
```bash
pip install -q transformers datasets
pip install 'git+https://github.com/facebookresearch/detectron2.git'
pip install torchvision
sudo apt install tesseract-ocr
pip install -q pytesseract
```

Once you have installed all of the dependencies, restart your runtime.

We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in:
```py
from huggingface_hub import notebook_login

notebook_login()
```

Let's define some global variables.

```py
model_checkpoint = "microsoft/layoutlmv2-base-uncased"
batch_size = 4
```

Load the data

In this guide we use a small sample of preprocessed DocVQA that you can find on 🤗 Hub. If you'd like to use the full DocVQA dataset, you can register and download it on DocVQA homepage. If you do so, to proceed with this guide check out how to load files into a 🤗 dataset.
```py
from datasets import load_dataset

dataset = load_dataset("nielsr/docvqa_1200_examples")
dataset
DatasetDict({
    train: Dataset({
        features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'],
        num_rows: 200
    })
})
```
As you can see, the dataset is split into train and test sets already. Take a look at a random example to familiarize yourself with the features.

```py
dataset["train"].features
```
Here's what the individual fields represent:

* id: the example's id
* image: a PIL.Image.Image object containing the document image
* query: the question string - natural language asked question, in several languages
* answers: a list of correct answers provided by human annotators
* words and bounding_boxes: the results of OCR, which we will not use here
* answer: an answer matched by a different model which we will not use here

Let's leave only English questions, and drop the answer feature which appears to contain predictions by another model. We'll also take the first of the answers from the set provided by the annotators. Alternatively, you can randomly sample it.
```py
updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"])
updated_dataset = updated_dataset.map(
    lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"]
)
```
Note that the LayoutLMv2 checkpoint that we use in this guide has been trained with max_position_embeddings = 512 (you can find this information in the checkpoint's config.json file). We could truncate the examples, but to avoid a situation where the answer is at the end of a long document and ends up truncated, we'll instead remove the few examples where the embedding is likely to end up longer than 512. If most of the documents in your dataset are long, you can implement a sliding window strategy - check out this notebook for details.
```py
updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512)
```
At this point let's also remove the OCR features from this dataset. These are the result of OCR for fine-tuning a different model. They would still require some processing if we wanted to use them, as they do not match the input requirements of the model we use in this guide. Instead, we can use the [LayoutLMv2Processor] on the original data for both OCR and tokenization. This way we'll get the inputs that match the model's expected input. If you want to process images manually, check out the LayoutLMv2 model documentation to learn what input format the model expects.
```py
updated_dataset = updated_dataset.remove_columns("words")
updated_dataset = updated_dataset.remove_columns("bounding_boxes")
```
Finally, the data exploration won't be complete if we don't peek at an image example.

```py
updated_dataset["train"][11]["image"]
```

Preprocess the data

The Document Question Answering task is a multimodal task, and you need to make sure that the inputs from each modality are preprocessed according to the model's expectations. Let's start by loading the [LayoutLMv2Processor], which internally combines an image processor that can handle image data and a tokenizer that can encode text data.
```py
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(model_checkpoint)
```
Preprocessing document images

First, let's prepare the document images for the model with the help of the image_processor from the processor. By default, the image processor resizes the images to 224x224, makes sure they have the correct order of color channels, and applies OCR with tesseract to get words and normalized bounding boxes. In this tutorial, all of these defaults are exactly what we need. Write a function that applies the default image processing to a batch of images and returns the results of OCR.
```py
image_processor = processor.image_processor

def get_ocr_words_and_boxes(examples):
    images = [image.convert("RGB") for image in examples["image"]]
    encoded_inputs = image_processor(images)

    examples["image"] = encoded_inputs.pixel_values
    examples["words"] = encoded_inputs.words
    examples["boxes"] = encoded_inputs.boxes

    return examples
```
To apply this preprocessing to the entire dataset in a fast way, use [~datasets.Dataset.map].
```py
dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2)
```

Preprocessing text data

Once we have applied OCR to the images, we need to encode the text part of the dataset to prepare it for the model. This involves converting the words and boxes that we got in the previous step to token-level input_ids, attention_mask, token_type_ids and bbox. For preprocessing text, we'll need the tokenizer from the processor.

```py
tokenizer = processor.tokenizer
```
On top of the preprocessing mentioned above, we also need to add the labels for the model. For xxxForQuestionAnswering models in 🤗 Transformers, the labels consist of the start_positions and end_positions, indicating which token is at the start and which token is at the end of the answer.

Let's start with that. Define a helper function that can find a sublist (the answer split into words) in a larger list (the words list).

This function will take two lists as input, words_list and answer_list. It will then iterate over words_list and check if the current word in words_list (words_list[i]) is equal to the first word of answer_list (answer_list[0]), and if the sublist of words_list starting from the current word and of the same length as answer_list is equal to answer_list. If this condition is true, it means that a match has been found, and the function will record the match, its starting index (idx), and its ending index (idx + len(answer_list) - 1). If more than one match was found, the function will return only the first one. If no match is found, the function returns (None, 0, 0).
```py
def subfinder(words_list, answer_list):
    matches = []
    start_indices = []
    end_indices = []
    for idx, i in enumerate(range(len(words_list))):
        if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list:
            matches.append(answer_list)
            start_indices.append(idx)
            end_indices.append(idx + len(answer_list) - 1)
    if matches:
        return matches[0], start_indices[0], end_indices[0]
    else:
        return None, 0, 0
```
To illustrate how this function finds the position of the answer, let's use it on an example:
example = dataset_with_ocr["train"][1] words = [word.lower() for word in example["words"]] match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split()) print("Question: ", example["question"]) print("Words:", words) print("Answer: ", example["answer"]) print("start_index", word_idx_start) print("end_index", word_idx_end) Question: Who is in cc in this letter? Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 'non', 'menthol', 'cigarette.', 'cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', '«short', 'cigarette,', 'tobacco', 'section', '30', 'mm.', '«extremely', 'fast', 'buming', 'cigarette.', '«novel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', '«more', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', 'colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498'] Answer: T.F. Riehl start_index 17 end_index 18
Once examples are encoded, however, they will look like this:

```py
encoding = tokenizer(example["question"], example["words"], example["boxes"])
tokenizer.decode(encoding["input_ids"])
[CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development
```
We'll need to find the position of the answer in the encoded input.

* token_type_ids tells us which tokens are part of the question, and which ones are part of the document's words.
* tokenizer.cls_token_id will help find the special token at the beginning of the input.
* word_ids will help match the answer found in the original words to the same answer in the full encoded input and determine the start/end position of the answer in the encoded input.

With that in mind, let's create a function to encode a batch of examples in the dataset:
```py
def encode_dataset(examples, max_length=512):
    questions = examples["question"]
    words = examples["words"]
    boxes = examples["boxes"]
    answers = examples["answer"]

    # encode the batch of examples and initialize the start_positions and end_positions
    encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True)
    start_positions = []
    end_positions = []

    # loop through the examples in the batch
    for i in range(len(questions)):
        cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id)

        # find the position of the answer in example's words
        words_example = [word.lower() for word in words[i]]
        answer = answers[i]
        match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split())

        if match:
            # if match is found, use token_type_ids to find where words start in the encoding
            token_type_ids = encoding["token_type_ids"][i]
            token_start_index = 0
            while token_type_ids[token_start_index] != 1:
                token_start_index += 1

            token_end_index = len(encoding["input_ids"][i]) - 1
            while token_type_ids[token_end_index] != 1:
                token_end_index -= 1

            word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1]
            start_position = cls_index
            end_position = cls_index

            # loop over word_ids and increase token_start_index until it matches the answer position in words
            # once it matches, save the token_start_index as the start_position of the answer in the encoding
            for id in word_ids:
                if id == word_idx_start:
                    start_position = token_start_index
                else:
                    token_start_index += 1

            # similarly loop over word_ids starting from the end to find the end_position of the answer
            for id in word_ids[::-1]:
                if id == word_idx_end:
                    end_position = token_end_index
                else:
                    token_end_index -= 1

            start_positions.append(start_position)
            end_positions.append(end_position)

        else:
            start_positions.append(cls_index)
            end_positions.append(cls_index)

    encoding["image"] = examples["image"]
    encoding["start_positions"] = start_positions
    encoding["end_positions"] = end_positions

    return encoding
```