---
license: apache-2.0
language:
  - ta
  - en
pipeline_tag: text2text-generation
datasets:
  - aishu15/aryaumeshl
---

# English-to-Tamil

## Usage

To use this model, you can either use the Hugging Face `transformers` library directly or call the model through the Hugging Face Inference API.
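
For the hosted route, here is a minimal sketch of calling the Inference API with `requests`; the endpoint URL follows the standard `api-inference.huggingface.co/models/<repo>` pattern, and `hf_xxx` is a placeholder for your own access token.

```python
import requests

# Hosted Inference API endpoint for this checkpoint (standard Hugging Face URL pattern)
API_URL = "https://api-inference.huggingface.co/models/aishu15/English-to-Tamil"
headers = {"Authorization": "Bearer hf_xxx"}  # replace with your own access token

def query(payload):
    # Send the input text and return the JSON response from the API
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

print(query({"inputs": "hardwork never fail"}))
```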

## Model Information

### Training Details

- This model has been fine-tuned for English-to-Tamil translation.
- Training duration: over 10 hours
- Loss achieved: 0.6

### Model Architecture

The model is based on the Transformer architecture, specifically optimized for sequence-to-sequence tasks.

## Installation

To use this model, you'll need the `transformers` library installed. You can install it via pip:

```bash
pip install transformers
```

## Via the Transformers Library

You can use this model in your Python code like this:

### Inference

How to use the model in a notebook:
```python
# Load the model and tokenizer directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "aishu15/English-to-Tamil"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    """Translate an English sentence into Tamil."""
    tokenized = tokenizer([text], return_tensors="pt")
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "hardwork never fail"
output = language_translator(text_to_translate)
print(output)
```
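
Alternatively, a minimal sketch using the `transformers` pipeline helper; the task name follows the `text2text-generation` tag declared in the metadata above.

```python
from transformers import pipeline

# Build a text2text-generation pipeline around the same checkpoint
translator = pipeline("text2text-generation", model="aishu15/English-to-Tamil")

result = translator("hardwork never fail", max_length=128)
print(result[0]["generated_text"])
```

The pipeline handles tokenization and decoding internally, so the helper function shown earlier is optional.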