Quantization made by Richard Erkhov.

# spanish-dialoGPT - GGUF

- Model creator: https://huggingface.co/emre/
- Original model: https://huggingface.co/emre/spanish-dialoGPT/
Name | Quant method | Size |
---|---|---|
spanish-dialoGPT.Q2_K.gguf | Q2_K | 0.08GB |
spanish-dialoGPT.IQ3_XS.gguf | IQ3_XS | 0.08GB |
spanish-dialoGPT.IQ3_S.gguf | IQ3_S | 0.08GB |
spanish-dialoGPT.Q3_K_S.gguf | Q3_K_S | 0.08GB |
spanish-dialoGPT.IQ3_M.gguf | IQ3_M | 0.09GB |
spanish-dialoGPT.Q3_K.gguf | Q3_K | 0.09GB |
spanish-dialoGPT.Q3_K_M.gguf | Q3_K_M | 0.09GB |
spanish-dialoGPT.Q3_K_L.gguf | Q3_K_L | 0.1GB |
spanish-dialoGPT.IQ4_XS.gguf | IQ4_XS | 0.1GB |
spanish-dialoGPT.Q4_0.gguf | Q4_0 | 0.1GB |
spanish-dialoGPT.IQ4_NL.gguf | IQ4_NL | 0.1GB |
spanish-dialoGPT.Q4_K_S.gguf | Q4_K_S | 0.1GB |
spanish-dialoGPT.Q4_K.gguf | Q4_K | 0.11GB |
spanish-dialoGPT.Q4_K_M.gguf | Q4_K_M | 0.11GB |
spanish-dialoGPT.Q4_1.gguf | Q4_1 | 0.11GB |
spanish-dialoGPT.Q5_0.gguf | Q5_0 | 0.11GB |
spanish-dialoGPT.Q5_K_S.gguf | Q5_K_S | 0.11GB |
spanish-dialoGPT.Q5_K.gguf | Q5_K | 0.12GB |
spanish-dialoGPT.Q5_K_M.gguf | Q5_K_M | 0.12GB |
spanish-dialoGPT.Q5_1.gguf | Q5_1 | 0.12GB |
spanish-dialoGPT.Q6_K.gguf | Q6_K | 0.13GB |
spanish-dialoGPT.Q8_0.gguf | Q8_0 | 0.17GB |
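
These GGUF files run in llama.cpp and compatible runtimes. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the `repo_id` is an assumption (substitute this repo's actual id), and any quant filename from the table above will work:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the Hub.
# NOTE: repo_id below is an assumed placeholder -- replace with this repo's actual id.
model_path = hf_hub_download(
    repo_id="RichardErkhov/emre_-_spanish-dialoGPT-gguf",  # assumed repo id
    filename="spanish-dialoGPT.Q4_K_M.gguf",               # any quant from the table above
)

# Load the model with a modest context window.
llm = Llama(model_path=model_path, n_ctx=1024)

# DialoGPT is a plain causal LM, so we prompt it with raw text.
out = llm("¿Cómo está el clima hoy?", max_tokens=32)
print(out["choices"][0]["text"])
```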
## Original model description

---
tags:
- conversational
license: mit
datasets:
- bigscience/opensubtitles
language:
- es
pipeline_tag: text-generation
---
### Fine-tuned DialoGPT model on Spanish conversations

This model was fine-tuned from the original DialoGPT-medium model on subtitles from Spanish movies and telenovelas from the awesome OpenSubtitles dataset.

DialoGPT paper: https://arxiv.org/abs/1911.00536
Some example dialog from the model:
Role | Response |
---|---|
User | ¿Cómo está el clima hoy? |
Bot | Muy bien. |
User | Eso es bueno escuchar. ¿Vas a ir a la playa? |
Bot | No, también puedo regresar. |
User | Oh bien, bueno, iré a la playa más tarde. |
Bot | ¡No! |
User | ¿Qué está mal con eso? |
Bot | ¡Cómo que no! |
User | Estoy confundido, ¿por qué no puedo ir a la playa? |
Bot | ¡Cómo que no! |
User | Explícamelo por favor. |
Bot | ¡No! |
### Using the model

Example code for trying out the model (adapted from the DialoGPT-medium model card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("emre/spanish-dialoGPT")
model = AutoModelForCausalLM.from_pretrained("emre/spanish-dialoGPT")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token, and return a PyTorch tensor
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```