# Fine-tuned t5-small model

This is a text summarization model fine-tuned from the t5-small architecture on the cnn_dailymail dataset.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("JayasakthiBalaji/Text_Summarization_2e-5")
model = AutoModelForSeq2SeqLM.from_pretrained("JayasakthiBalaji/Text_Summarization_2e-5")

text = "Type your long story for summarization...."

# T5 expects the task prefix "summarize: " before the input text.
inputs = tokenizer("summarize: " + text, return_tensors="pt", max_length=512, truncation=True)

# Beam search with a length penalty favors longer, more complete summaries.
outputs = model.generate(inputs.input_ids, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)

# generate() returns a batch of sequences; decode the first (and only) one.
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
```
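Alternatively, the high-level `pipeline` API wraps the same load/tokenize/generate steps in one call. This is a minimal sketch, not part of the original card; it assumes the checkpoint works with the default summarization pipeline settings (including the `summarize:` prefix inherited from the t5-small config):

```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint loads cleanly into the
# standard summarization pipeline.
summarizer = pipeline("summarization", model="JayasakthiBalaji/Text_Summarization_2e-5")

text = "Type your long story for summarization...."

# Generation kwargs are forwarded to model.generate().
result = summarizer(text, max_length=150, min_length=40, num_beams=4)
print(result[0]["summary_text"])
```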
## Base model

google-t5/t5-small