How to use the model

Import the model and tokenizer from the transformers library:

# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tykea/mBart-large-50-KQA")
model = AutoModelForSeq2SeqLM.from_pretrained("tykea/mBart-large-50-KQA")
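
Since the base model is mBART-50, the tokenizer also carries a source-language code (Khmer is km_KH in mBART-50). The example below does not set it, and the card does not state whether this fine-tune depends on it, so treat the following as an optional, hypothetical experiment rather than a requirement:

# Optional: mBART-50 tokenizers expose a source-language code; "km_KH" is mBART-50's code for Khmer.
# Whether this fine-tuned checkpoint expects it to be set is an assumption, not stated in the card.
print(tokenizer.src_lang)        # language code currently in use
# tokenizer.src_lang = "km_KH"   # uncomment to tag inputs as Khmer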

Define a function that takes a question and passes it to the model:

import torch

# ask() helper for querying the model with a single call
def ask(custom_question):
    # Tokenize the input
    inputs = tokenizer(
        f"qestion: {custom_question}",
        return_tensors="pt",
        truncation=True,
        max_length=512,
        padding="max_length"
    )

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)  # keep the model on the same device as the inputs
    inputs = {key: value.to(device) for key, value in inputs.items()}

    model.eval()
    with torch.no_grad():
        outputs = model.generate(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            max_length=50,
            num_beams=4,
            repetition_penalty=2.0,
            early_stopping=True,
            do_sample=True,
            top_k=50,
            top_p=0.95,
            temperature=0.7,
        )
    answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(f"Question: {custom_question}")
    print(f"Answer: {answer}")

Then call the ask function:

question = "αžαžΎαž”αŸ’αž’αžΌαž“αž€αžΎαžαž“αŸ…αž”αŸ’αžšαž‘αŸαžŸαžŽαžΆ?"
ask(question)
#output
Question: αžαžΎαž”αŸ’αž’αžΌαž“αž€αžΎαžαž“αŸ…αž”αŸ’αžšαž‘αŸαžŸαžŽαžΆ?
Answer: αž”αŸ’αž’αžΌαž“αž€αžΎαžαž“αŸ…αž”αŸ’αžšαž‘αŸαžŸαž…αž·αž“
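
Using the hypothetical ask_batch helper sketched above, several questions can be answered in one call:

questions = [
    "αžαžΎαž”αŸ’αž’αžΌαž“αž€αžΎαžαž“αŸ…αž”αŸ’αžšαž‘αŸαžŸαžŽαžΆ?",  # the example question from above
    # add more Khmer questions here
]
for q, a in zip(questions, ask_batch(questions)):
    print(f"Question: {q}")
    print(f"Answer: {a}")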