Now that we have this preprocessing function, we can encode the entire dataset: encoded_train_dataset = dataset_with_ocr["train"].map( encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names ) encoded_test_dataset = dataset_with_ocr["test"].map( encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names ) Let's check what the features of the encoded dataset look like:
encoded_train_dataset.features {'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'start_positions': Value(dtype='int64', id=None), 'end_positions': Value(dtype='int64', id=None)}
Evaluation Evaluation for document question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [Trainer] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance. Extractive question answering is typically evaluated using F1/exact match. If you'd like to implement it yourself, check out the Question Answering chapter of the Hugging Face course for inspiration (a minimal sketch also follows the training steps below). Train Congratulations! You've successfully navigated the toughest part of this guide and now you are ready to train your own model. Training involves the following steps: * Load the model with [AutoModelForDocumentQuestionAnswering] using the same checkpoint as in the preprocessing. * Define your training hyperparameters in [TrainingArguments]. * Define a function to batch examples together; here the [DefaultDataCollator] will do just fine. * Pass the training arguments to [Trainer] along with the model, dataset, and data collator. * Call [~Trainer.train] to finetune your model.
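If you do want to implement the skipped F1/exact match evaluation yourself, here is a minimal sketch using the "squad" metric from the 🤗 Evaluate library. It is not part of the original guide, and the prediction/reference entries below are hypothetical and only show the expected input format; you would still need to post-process the model's start/end logits into answer strings first.

```python
# A minimal sketch, not from the original guide: computing F1/exact match with 🤗 Evaluate.
import evaluate

squad_metric = evaluate.load("squad")

# Hypothetical post-processed prediction and reference in SQuAD format.
predictions = [{"id": "0", "prediction_text": "lee a. waller"}]
references = [{"id": "0", "answers": {"text": ["lee a. waller"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```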
from transformers import AutoModelForDocumentQuestionAnswering model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint)
In the [TrainingArguments] use output_dir to specify where to save your model, and configure hyperparameters as you see fit. If you wish to share your model with the community, set push_to_hub to True (you must be signed in to Hugging Face to upload your model). In this case the output_dir will also be the name of the repo where your model checkpoint will be pushed.
from transformers import TrainingArguments # REPLACE THIS WITH YOUR REPO ID repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa" training_args = TrainingArguments( output_dir=repo_id, per_device_train_batch_size=4, num_train_epochs=20, save_steps=200, logging_steps=50, evaluation_strategy="steps", learning_rate=5e-5, save_total_limit=2, remove_unused_columns=False, push_to_hub=True, )
Define a simple data collator to batch examples together. from transformers import DefaultDataCollator data_collator = DefaultDataCollator() Finally, bring everything together, and call [~Trainer.train]: from transformers import Trainer trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=encoded_train_dataset, eval_dataset=encoded_test_dataset, tokenizer=processor, ) trainer.train()
To add the final model to 🤗 Hub, create a model card and call push_to_hub: trainer.create_model_card() trainer.push_to_hub() Inference Now that you have finetuned a LayoutLMv2 model, and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your finetuned model for inference is to use it in a [Pipeline]. Let's take an example:
example = dataset["test"][2] question = example["query"]["en"] image = example["image"] print(question) print(example["answers"]) 'Who is ‘presiding’ TRRF GENERAL SESSION (PART 1)?' ['TRRF Vice President', 'lee a. waller'] Next, instantiate a pipeline for document question answering with your model, and pass the image + question combination to it.
Next, instantiate a pipeline for document question answering with your model, and pass the image + question combination to it. from transformers import pipeline qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa") qa_pipeline(image, question) [{'score': 0.9949808120727539, 'answer': 'Lee A. Waller', 'start': 55, 'end': 57}]
You can also manually replicate the results of the pipeline if you'd like: 1. Take an image and a question, prepare them for the model using the processor from your model. 2. Forward the result of preprocessing through the model. 3. The model returns start_logits and end_logits, which indicate which token is at the start of the answer and which token is at the end of the answer. Both have shape (batch_size, sequence_length). 4. Take an argmax on the last dimension of both the start_logits and end_logits to get the predicted start_idx and end_idx. 5. Decode the answer with the tokenizer.
import torch from transformers import AutoProcessor from transformers import AutoModelForDocumentQuestionAnswering processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") with torch.no_grad(): encoding = processor(image.convert("RGB"), question, return_tensors="pt") outputs = model(**encoding) start_logits = outputs.start_logits end_logits = outputs.end_logits predicted_start_idx = start_logits.argmax(-1).item() predicted_end_idx = end_logits.argmax(-1).item() processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]) 'lee a. waller'
Image tasks with IDEFICS [[open-in-colab]] While individual tasks can be tackled by fine-tuning specialized models, an alternative approach that has recently emerged and gained popularity is to use large models for a diverse set of tasks without fine-tuning. For instance, large language models can handle such NLP tasks as summarization, translation, classification, and more. This approach is no longer limited to a single modality, such as text, and in this guide, we will illustrate how you can solve image-text tasks with a large multimodal model called IDEFICS. IDEFICS is an open-access vision and language model based on Flamingo, a state-of-the-art visual language model initially developed by DeepMind. The model accepts arbitrary sequences of image and text inputs and generates coherent text as output. It can answer questions about images, describe visual content, create stories grounded in multiple images, and so on. IDEFICS comes in two variants - 80 billion parameters and 9 billion parameters, both of which are available on the 🤗 Hub. For each variant, you can also find fine-tuned instructed versions of the model adapted for conversational use cases. This model is exceptionally versatile and can be used for a wide range of image and multimodal tasks. However, being a large model means it requires significant computational resources and infrastructure. It is up to you to decide whether this approach suits your use case better than fine-tuning specialized models for each individual task. In this guide, you'll learn how to: - Load IDEFICS and load the quantized version of the model - Use IDEFICS for: - Image captioning - Prompted image captioning - Few-shot prompting - Visual question answering - Image classification - Image-guided text generation - Run inference in batch mode - Run IDEFICS instruct for conversational use Before you begin, make sure you have all the necessary libraries installed.
pip install -q bitsandbytes sentencepiece accelerate transformers To run the following examples with a non-quantized version of the model checkpoint you will need at least 20GB of GPU memory.
Loading the model Let's start by loading the model's 9 billion parameter checkpoint: checkpoint = "HuggingFaceM4/idefics-9b" Just like for other Transformers models, you need to load a processor and the model itself from the checkpoint. The IDEFICS processor wraps a [LlamaTokenizer] and an IDEFICS image processor into a single processor to take care of preparing text and image inputs for the model.
import torch from transformers import IdeficsForVisionText2Text, AutoProcessor processor = AutoProcessor.from_pretrained(checkpoint) model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")
Setting device_map to "auto" will automatically determine how to load and store the model weights in the most optimized manner given existing devices. Quantized model If high-memory GPU availability is an issue, you can load the quantized version of the model. To load the model and the processor in 4bit precision, pass a BitsAndBytesConfig to the from_pretrained method and the model will be compressed on the fly while loading.
import torch from transformers import IdeficsForVisionText2Text, AutoProcessor, BitsAndBytesConfig quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16, ) processor = AutoProcessor.from_pretrained(checkpoint) model = IdeficsForVisionText2Text.from_pretrained( checkpoint, quantization_config=quantization_config, device_map="auto" )
Now that you have the model loaded in one of the suggested ways, let's move on to exploring tasks that you can use IDEFICS for. Image captioning Image captioning is the task of predicting a caption for a given image. A common application is to help visually impaired people navigate different situations, for instance, by exploring image content online. To illustrate the task, get an image to be captioned, e.g.:
Photo by Hendo Wang. IDEFICS accepts text and image prompts. However, to caption an image, you do not have to provide a text prompt to the model, only the preprocessed input image. Without a text prompt, the model will start generating text from the BOS (beginning-of-sequence) token, thus creating a caption. As image input to the model, you can use either an image object (PIL.Image) or a URL from which the image can be retrieved.
prompt = [ "https://images.unsplash.com/photo-1583160247711-2191776b4b91?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3542&q=80", ] inputs = processor(prompt, return_tensors="pt").to("cuda") bad_words_ids = processor.tokenizer(["", ""], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text[0]) A puppy in a flower bed
It is a good idea to include the bad_words_ids in the call to generate to avoid errors arising when increasing the max_new_tokens: the model will want to generate a new <image> or <fake_token_around_image> token when there is no image being generated by the model. You can set it on-the-fly as in this guide, or store in the GenerationConfig as described in the Text generation strategies guide.
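If you prefer not to repeat bad_words_ids on every call, one option (a minimal sketch, not taken from this guide) is to store it in a GenerationConfig and pass that to generate:

```python
# A minimal sketch: keeping bad_words_ids in a GenerationConfig so it doesn't have to be
# passed to every generate() call. The token strings follow the IDEFICS processor.
from transformers import GenerationConfig

bad_words_ids = processor.tokenizer(
    ["<image>", "<fake_token_around_image>"], add_special_tokens=False
).input_ids

generation_config = GenerationConfig(max_new_tokens=10, bad_words_ids=bad_words_ids)
generated_ids = model.generate(**inputs, generation_config=generation_config)
```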
Prompted image captioning You can extend image captioning by providing a text prompt, which the model will continue given the image. Let's take another image to illustrate: Photo by Denys Nevozhai. Textual and image prompts can be passed to the model's processor as a single list to create appropriate inputs.
prompt = [ "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", "This is an image of ", ] inputs = processor(prompt, return_tensors="pt").to("cuda") bad_words_ids = processor.tokenizer(["", ""], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text[0]) This is an image of the Eiffel Tower in Paris, France.
Few-shot prompting While IDEFICS demonstrates great zero-shot results, your task may require a certain format of the caption, or come with other restrictions or requirements that increase the task's complexity. Few-shot prompting can be used to enable in-context learning. By providing examples in the prompt, you can steer the model to generate results that mimic the format of given examples. Let's use the previous image of the Eiffel Tower as an example for the model and build a prompt that demonstrates to the model that in addition to learning what the object in an image is, we would also like to get some interesting information about it. Then, let's see if we can get the same response format for an image of the Statue of Liberty:
Photo by Juan Mayobre.
prompt = ["User:", "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", "Describe this image.\nAssistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building.\n", "User:", "https://images.unsplash.com/photo-1524099163253-32b7f0256868?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3387&q=80", "Describe this image.\nAssistant:" ] inputs = processor(prompt, return_tensors="pt").to("cuda") bad_words_ids = processor.tokenizer(["", ""], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, max_new_tokens=30, bad_words_ids=bad_words_ids) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text[0]) User: Describe this image. Assistant: An image of the Eiffel Tower at night. Fun fact: the Eiffel Tower is the same height as an 81-storey building. User: Describe this image. Assistant: An image of the Statue of Liberty. Fun fact: the Statue of Liberty is 151 feet tall.
Notice that just from a single example (i.e., 1-shot) the model has learned how to perform the task. For more complex tasks, feel free to experiment with a larger number of examples (e.g., 3-shot, 5-shot, etc.). Visual question answering Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. Similar to image captioning it can be used in accessibility applications, but also in education (reasoning about visual materials), customer service (questions about products based on images), and image retrieval. Let's get a new image for this task:
Photo by Jarritos Mexican Soda. You can steer the model from image captioning to visual question answering by prompting it with appropriate instructions:
prompt = [ "Instruction: Provide an answer to the question. Use the image to answer.\n", "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", "Question: Where are these people and what's the weather like? Answer:" ] inputs = processor(prompt, return_tensors="pt").to("cuda") bad_words_ids = processor.tokenizer(["", ""], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, max_new_tokens=20, bad_words_ids=bad_words_ids) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text[0]) Instruction: Provide an answer to the question. Use the image to answer. Question: Where are these people and what's the weather like? Answer: They're in a park in New York City, and it's a beautiful day.
Image classification IDEFICS is capable of classifying images into different categories without being explicitly trained on data containing labeled examples from those specific categories. Given a list of categories and using its image and text understanding capabilities, the model can infer which category the image likely belongs to. Say, we have this image of a vegetable stand: Photo by Peter Wendt. We can instruct the model to classify the image into one of the categories that we have:
categories = ['animals','vegetables', 'city landscape', 'cars', 'office'] prompt = [f"Instruction: Classify the following image into a single category from the following list: {categories}.\n", "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", "Category: " ] inputs = processor(prompt, return_tensors="pt").to("cuda") bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, max_new_tokens=6, bad_words_ids=bad_words_ids) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text[0]) Instruction: Classify the following image into a single category from the following list: ['animals', 'vegetables', 'city landscape', 'cars', 'office']. Category: Vegetables
In the example above we instruct the model to classify the image into a single category; however, you can also prompt the model to do rank classification. Image-guided text generation For more creative applications, you can use image-guided text generation to generate text based on an image. This can be useful to create descriptions of products, ads, descriptions of a scene, etc. Let's prompt IDEFICS to write a story based on a simple image of a red door: Photo by Craig Tidball.
prompt = ["Instruction: Use the image to write a story. \n", "https://images.unsplash.com/photo-1517086822157-2b0358e7684a?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=2203&q=80", "Story: \n"] inputs = processor(prompt, return_tensors="pt").to("cuda") bad_words_ids = processor.tokenizer(["", ""], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, num_beams=2, max_new_tokens=200, bad_words_ids=bad_words_ids) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text[0]) Instruction: Use the image to write a story. Story: Once upon a time, there was a little girl who lived in a house with a red door. She loved her red door. It was the prettiest door in the whole world.
One day, the little girl was playing in her yard when she noticed a man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran inside and told her mother about the man. Her mother said, “Don’t worry, honey. He’s just a friendly ghost.” The little girl wasn’t sure if she believed her mother, but she went outside anyway. When she got to the door, the man was gone. The next day, the little girl was playing in her yard again when she noticed the man standing on her doorstep. He was wearing a long black coat and a top hat. The little girl ran
Looks like IDEFICS noticed the pumpkin on the doorstep and went with a spooky Halloween story about a ghost. For longer outputs like this, you will greatly benefit from tweaking the text generation strategy. This can help you significantly improve the quality of the generated output. Check out Text generation strategies to learn more.
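As a starting point, here is a minimal sketch of a few decoding parameters you could experiment with. The values below are illustrative assumptions, not recommendations from the guide:

```python
# A minimal sketch of tweaking the decoding strategy for longer outputs.
# The parameter values are illustrative, not tuned settings.
generated_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,            # sample instead of greedy/beam decoding
    temperature=0.7,           # soften the next-token distribution
    top_p=0.9,                 # nucleus sampling
    no_repeat_ngram_size=3,    # discourage repetitive phrasing
    bad_words_ids=bad_words_ids,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```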
Running inference in batch mode All of the earlier sections illustrated IDEFICS for a single example. In a very similar fashion, you can run inference for a batch of examples by passing a list of prompts:
prompts = [ [ "https://images.unsplash.com/photo-1543349689-9a4d426bee8e?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3501&q=80", "This is an image of ", ], [ "https://images.unsplash.com/photo-1623944889288-cd147dbb517c?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", "This is an image of ", ], [ "https://images.unsplash.com/photo-1471193945509-9ad0617afabf?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=3540&q=80", "This is an image of ", ], ] inputs = processor(prompts, return_tensors="pt").to("cuda") bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, max_new_tokens=10, bad_words_ids=bad_words_ids) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) for i,t in enumerate(generated_text): print(f"{i}:\n{t}\n") 0: This is an image of the Eiffel Tower in Paris, France.
1: This is an image of a couple on a picnic blanket. 2: This is an image of a vegetable stand.
IDEFICS instruct for conversational use For conversational use cases, you can find fine-tuned instructed versions of the model on the 🤗 Hub: HuggingFaceM4/idefics-80b-instruct and HuggingFaceM4/idefics-9b-instruct. These checkpoints are the result of fine-tuning the respective base models on a mixture of supervised and instruction fine-tuning datasets, which boosts the downstream performance while making the models more usable in conversational settings. Prompting for conversational use is very similar to using the base models:
import torch from transformers import IdeficsForVisionText2Text, AutoProcessor device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "HuggingFaceM4/idefics-9b-instruct" model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) processor = AutoProcessor.from_pretrained(checkpoint) prompts = [ [ "User: What is in this image?", "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG", "<end_of_utterance>",
"\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.", "\nUser:", "https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052", "And who is that?", "\nAssistant:", ], ]
# --batched mode inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device) # --single sample mode inputs = processor(prompts[0], return_tensors="pt").to(device) # Generation args exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) for i, t in enumerate(generated_text): print(f"{i}:\n{t}\n")
Summarization Before you begin, make sure you have all the necessary libraries installed: pip install transformers datasets evaluate rouge_score We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login notebook_login() Load BillSum dataset Start by loading the smaller California state bill subset of the BillSum dataset from the 🤗 Datasets library: from datasets import load_dataset billsum = load_dataset("billsum", split="ca_test") Split the dataset into a train and test set with the [~datasets.Dataset.train_test_split] method: billsum = billsum.train_test_split(test_size=0.2) Then take a look at an example:
billsum["train"][0] {'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employee’s or dependent’s actual or perceived gender identity, including, but not limited to, the employee’s or dependent’s identification as transgender.\n(2) For purposes of this section, “contract” includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractor’s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractor’s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the requirement of the contractor’s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may 
be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) (1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 
3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.', 'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'}
There are two fields that you'll want to use: text: the text of the bill which'll be the input to the model. summary: a condensed version of text which'll be the model target. Preprocess The next step is to load a T5 tokenizer to process text and summary:
from transformers import AutoTokenizer checkpoint = "google-t5/t5-small" tokenizer = AutoTokenizer.from_pretrained(checkpoint) The preprocessing function you want to create needs to: Prefix the input with a prompt so T5 knows this is a summarization task. Some models capable of multiple NLP tasks require prompting for specific tasks. Use the keyword text_target argument when tokenizing labels. Truncate sequences to be no longer than the maximum length set by the max_length parameter.
prefix = "summarize: " def preprocess_function(examples): inputs = [prefix + doc for doc in examples["text"]] model_inputs = tokenizer(inputs, max_length=1024, truncation=True) labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:
tokenized_billsum = billsum.map(preprocess_function, batched=True) Now create a batch of examples using [DataCollatorForSeq2Seq]. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. from transformers import DataCollatorForSeq2Seq data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
For TensorFlow, pass return_tensors="tf" when creating the collator: from transformers import DataCollatorForSeq2Seq data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
Evaluate Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the ROUGE metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric): import evaluate rouge = evaluate.load("rouge")
Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the ROUGE metric: import numpy as np def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] result["gen_len"] = np.mean(prediction_lens) return {k: round(v, 4) for k, v in result.items()}
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training. Train If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here! You're ready to start training your model now! Load T5 with [AutoModelForSeq2SeqLM]: from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) At this point, only three steps remain:
Define your training hyperparameters in [Seq2SeqTrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the ROUGE metric and save the training checkpoint. Pass the training arguments to [Seq2SeqTrainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function. Call [~Trainer.train] to finetune your model.
training_args = Seq2SeqTrainingArguments( output_dir="my_awesome_billsum_model", evaluation_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=0.01, save_total_limit=3, num_train_epochs=4, predict_with_generate=True, fp16=True, push_to_hub=True, ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_billsum["train"], eval_dataset=tokenized_billsum["test"], tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model: trainer.push_to_hub() If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters: from transformers import create_optimizer, AdamWeightDecay optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) Then you can load T5 with [TFAutoModelForSeq2SeqLM]: from transformers import TFAutoModelForSeq2SeqLM model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]: tf_train_set = model.prepare_tf_dataset( tokenized_billsum["train"], shuffle=True, batch_size=16, collate_fn=data_collator, ) tf_test_set = model.prepare_tf_dataset( tokenized_billsum["test"], shuffle=False, batch_size=16, collate_fn=data_collator, )
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to: import tensorflow as tf model.compile(optimizer=optimizer) # No loss argument!
The last two things to set up before you start training are to compute the ROUGE score from the predictions, and to provide a way to push your model to the Hub. Both are done by using Keras callbacks. Pass your compute_metrics function to [~transformers.KerasMetricCallback]:
from transformers.keras_callbacks import KerasMetricCallback metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set) Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]: from transformers.keras_callbacks import PushToHubCallback push_to_hub_callback = PushToHubCallback( output_dir="my_awesome_billsum_model", tokenizer=tokenizer, )
Then bundle your callbacks together: callbacks = [metric_callback, push_to_hub_callback] Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model: model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks) Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for summarization, take a look at the corresponding PyTorch notebook or TensorFlow notebook. Inference Great, now that you've finetuned a model, you can use it for inference! Come up with some text you'd like to summarize. For T5, you need to prefix your input depending on the task you're working on. For summarization you should prefix your input as shown below:
text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes."
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for summarization with your model, and pass your text to it:
from transformers import pipeline summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model") summarizer(text) [{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}]
You can also manually replicate the results of the pipeline if you'd like: Tokenize the text and return the input_ids as PyTorch tensors: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") inputs = tokenizer(text, return_tensors="pt").input_ids
Use the [~transformers.generation_utils.GenerationMixin.generate] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the Text Generation API.
from transformers import AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
Decode the generated token ids back into text: tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' Tokenize the text and return the input_ids as TensorFlow tensors:
from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") inputs = tokenizer(text, return_tensors="tf").input_ids Use the [~transformers.generation_tf_utils.TFGenerationMixin.generate] method to create the summarization. For more details about the different text generation strategies and parameters for controlling generation, check out the Text Generation API.
from transformers import TFAutoModelForSeq2SeqLM model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") outputs = model.generate(inputs, max_new_tokens=100, do_sample=False)
Decode the generated token ids back into text: tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.'
Mask Generation Mask generation is the task of generating semantically meaningful masks for an image. This task is very similar to image segmentation, but many differences exist. Image segmentation models are trained on labeled datasets and are limited to the classes they have seen during training; they return a set of masks and corresponding classes, given an image. Mask generation models are trained on large amounts of data and operate in two modes. - Prompting mode: In this mode, the model takes in an image and a prompt, where a prompt can be a 2D point location (XY coordinates) in the image within an object or a bounding box surrounding an object. In prompting mode, the model only returns the mask over the object that the prompt is pointing out. - Segment Everything mode: In segment everything, given an image, the model generates every mask in the image. To do so, a grid of points is generated and overlaid on the image for inference. The mask generation task is supported by the Segment Anything Model (SAM). It's a powerful model that consists of a Vision Transformer-based image encoder, a prompt encoder, and a two-way transformer mask decoder. Images and prompts are encoded, and the decoder takes these embeddings and generates valid masks.
SAM serves as a powerful foundation model for segmentation as it has large data coverage. It is trained on SA-1B, a dataset with 11 million images and 1.1 billion masks. In this guide, you will learn how to: - Infer in segment everything mode with batching, - Infer in point prompting mode, - Infer in box prompting mode. First, let's install transformers: pip install -q transformers
Mask Generation Pipeline The easiest way to infer mask generation models is to use the mask-generation pipeline: from transformers import pipeline checkpoint = "facebook/sam-vit-base" mask_generator = pipeline(model=checkpoint, task="mask-generation")
Let's see the image. from PIL import Image import requests img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg" image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
Let's segment everything. points_per_batch enables parallel inference of points in segment everything mode. This enables faster inference, but consumes more memory. Moreover, SAM only enables batching over points and not over images. pred_iou_thresh is the IoU confidence threshold where only the masks above that threshold are returned. masks = mask_generator(image, points_per_batch=128, pred_iou_thresh=0.88) The masks look like the following:
{'masks': [array([[False, False, False, ..., True, True, True], [False, False, False, ..., True, True, True], [False, False, False, ..., True, True, True], ..., [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False]]), array([[False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], ...]), ...], 'scores': tensor([0.9972, 0.9917, ...]), ...} We can visualize them like this: import matplotlib.pyplot as plt plt.imshow(image, cmap='gray') for i, mask in enumerate(masks["masks"]): plt.imshow(mask, cmap='viridis', alpha=0.1, vmin=0, vmax=1) plt.axis('off') plt.show()
Below is the original image in grayscale with colorful maps overlaid. Very impressive. Model Inference Point Prompting You can also use the model without the pipeline. To do so, initialize the model and the processor. import torch from transformers import SamModel, SamProcessor device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = SamModel.from_pretrained("facebook/sam-vit-base").to(device) processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
To do point prompting, pass the input point to the processor, then take the processor output and pass it to the model for inference. To post-process the model output, pass the outputs along with the original_sizes and reshaped_input_sizes we take from the processor's initial output. We need to pass these since the processor resizes the image, and the output needs to be extrapolated. input_points = [[[2592, 1728]]] # point location of the bee inputs = processor(image, input_points=input_points, return_tensors="pt").to(device) with torch.no_grad(): outputs = model(**inputs) masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()) We can visualize the three masks in the masks output. import torch import matplotlib.pyplot as plt import numpy as np fig, axes = plt.subplots(1, 4, figsize=(15, 5)) axes[0].imshow(image) axes[0].set_title('Original Image') mask_list = [masks[0][0][0].numpy(), masks[0][0][1].numpy(), masks[0][0][2].numpy()] for i, mask in enumerate(mask_list, start=1): overlayed_image = np.array(image).copy() overlayed_image[:,:,0] = np.where(mask == 1, 255, overlayed_image[:,:,0]) overlayed_image[:,:,1] = np.where(mask == 1, 0, overlayed_image[:,:,1]) overlayed_image[:,:,2] = np.where(mask == 1, 0, overlayed_image[:,:,2])
axes[i].imshow(overlayed_image) axes[i].set_title(f'Mask {i}') for ax in axes: ax.axis('off') plt.show()
Box Prompting You can also do box prompting in a similar fashion to point prompting. You can simply pass the input box as a list in [x_min, y_min, x_max, y_max] format along with the image to the processor. Take the processor output and directly pass it to the model, then post-process the output again. # bounding box around the bee box = [2350, 1600, 2850, 2100] inputs = processor( image, input_boxes=[[[box]]], return_tensors="pt" ).to("cuda") with torch.no_grad(): outputs = model(**inputs) mask = processor.image_processor.post_process_masks( outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu() )[0][0][0].numpy()
You can visualize the bounding box around the bee as shown below. import matplotlib.patches as patches fig, ax = plt.subplots() ax.imshow(image) rectangle = patches.Rectangle((2350, 1600), 500, 500, linewidth=2, edgecolor='r', facecolor='none') ax.add_patch(rectangle) ax.axis("off") plt.show() You can see the inference output below. fig, ax = plt.subplots() ax.imshow(image) ax.imshow(mask, cmap='viridis', alpha=0.4) ax.axis("off") plt.show()
Zero-shot object detection [[open-in-colab]] Traditionally, models used for object detection require labeled image datasets for training, and are limited to detecting the set of classes from the training data. Zero-shot object detection is supported by the OWL-ViT model, which uses a different approach. OWL-ViT is an open-vocabulary object detector. This means that it can detect objects in images based on free-text queries without the need to fine-tune the model on labeled datasets. OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines CLIP with lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with the text encoder of CLIP and using them as input to the object classification and localization heads; CLIP learns to associate images and their corresponding textual descriptions, while ViT processes image patches as inputs. The authors of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end-to-end on standard object detection datasets using a bipartite matching loss. With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets. In this guide, you will learn how to use OWL-ViT: - to detect objects based on text prompts - for batch object detection - for image-guided object detection Before you begin, make sure you have all the necessary libraries installed:
pip install -q transformers Zero-shot object detection pipeline The simplest way to try out inference with OWL-ViT is to use it in a [pipeline]. Instantiate a pipeline for zero-shot object detection from a checkpoint on the Hugging Face Hub: from transformers import pipeline checkpoint = "google/owlv2-base-patch16-ensemble" detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
Next, choose an image you'd like to detect objects in. Here we'll use the image of astronaut Eileen Collins that is a part of the NASA Great Images dataset. import skimage import numpy as np from PIL import Image image = skimage.data.astronaut() image = Image.fromarray(np.uint8(image)).convert("RGB") image
Pass the image and the candidate object labels to look for to the pipeline. Here we pass the image directly; other suitable options include a local path to an image or an image url. We also pass text descriptions for all items we want to query the image for.
predictions = detector( image, candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"], ) predictions [{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}}, {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}}, {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}}, {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}}, {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}}, {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
Let's visualize the predictions: from PIL import ImageDraw draw = ImageDraw.Draw(image) for prediction in predictions: box = prediction["box"] label = prediction["label"] score = prediction["score"] xmin, ymin, xmax, ymax = box.values() draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white") image
Text-prompted zero-shot object detection by hand Now that you've seen how to use the zero-shot object detection pipeline, let's replicate the same result manually. Start by loading the model and associated processor from a checkpoint on the Hugging Face Hub. Here we'll use the same checkpoint as before:
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint) processor = AutoProcessor.from_pretrained(checkpoint) Let's take a different image to switch things up. import requests url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640" im = Image.open(requests.get(url, stream=True).raw) im
Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the image for the model by resizing and normalizing it, and a [CLIPTokenizer] that takes care of the text inputs. text_queries = ["hat", "book", "sunglasses", "camera"] inputs = processor(text=text_queries, images=im, return_tensors="pt")
text_queries = ["hat", "book", "sunglasses", "camera"] inputs = processor(text=text_queries, images=im, return_tensors="pt") Pass the inputs through the model, post-process, and visualize the results. Since the image processor resized images before feeding them to the model, you need to use the [~OwlViTImageProcessor.post_process_object_detection] method to make sure the predicted bounding boxes have the correct coordinates relative to the original image:
import torch with torch.no_grad(): outputs = model(**inputs) target_sizes = torch.tensor([im.size[::-1]]) results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0] draw = ImageDraw.Draw(im) scores = results["scores"].tolist() labels = results["labels"].tolist() boxes = results["boxes"].tolist() for box, score, label in zip(boxes, scores, labels): xmin, ymin, xmax, ymax = box draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white") im