Image-text-to-text models take in an image and a text prompt and output text. These models are also called vision-language models (VLMs). Unlike image-to-text models, they accept an additional text input, so they are not restricted to use cases like image captioning, and they may also be trained to accept a conversation as input.
For more details about the image-text-to-text task, check out its dedicated page! You will find examples and related materials.
Explore all available models and find the one that suits you best here.
To use the Python client, see huggingface_hub’s package reference.
For the API specification of conversational image-text-to-text models, please refer to the Chat Completion API documentation.
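As a quick illustration, below is a minimal sketch of calling a conversational image-text-to-text model through the Python client's chat completion interface. The model ID and image URL are placeholders you would replace with your own choices, and authentication details (token, provider) are omitted for brevity.

```python
from huggingface_hub import InferenceClient

# Create a client; a token can be passed via token=... or the HF_TOKEN env var.
client = InferenceClient()

# Conversational input: the user turn carries both an image and a text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# Send the conversation to a vision-language model (example model ID).
response = client.chat_completion(
    messages=messages,
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    max_tokens=100,
)

# The generated answer is in the first choice's message content.
print(response.choices[0].message.content)
```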